Presentation @ Digital Futures

I had the chance to be invited to the Digital Futures Conference hosted by Ravensbourne College to speak on Serious Games and learning. The conference featured a number of great professionals, such as Euan Semple and Dr. Paul Coulton from the University of Lancaster. The headliner was Howard Rheingold, who spoke for a couple of hours via teleconference.

I discovered games and learning in my research about a year ago, when I came across SAAM’s Ghosts of a Chance project and was fascinated by the idea of using the power of games to achieve something as tough and, at the same time, as important as learning. During my MFA studies I have been exploring in depth Serious Games, and Educational Games in particular, and their impact on learning, always focusing on case studies from the cultural heritage sector.

My ‘aha!’ moment came when I read about the so-called ‘Game Mechanics’, on which I focused my presentation at Digital Futures. Game mechanics are a set of rules intended to produce an enjoyable game experience, but they have also been applied in other areas, such as business.

One of the main themes of the conference was collaboration, so another point I would like to highlight from the presentation is the slide where I talked about my personal experience of game-enhanced learning. Quoting the text that accompanied the slide showing the Microsoft Silverlight forum:

As a developer myself, I find online forums very important, and whenever I face a serious issue I post a question online and within one or two hours people from all over the world respond to solve my (!) problem. And I have wondered many times: why do all these people bother? Why do they spend their time sharing their knowledge? Well, for a little while, I used to be one of them. When I used to develop with Microsoft Silverlight, at some point I felt I had to give back to the community that had helped me, by helping others. But to be honest, what really got me going were the game mechanics of the website, and for two months I tried very hard to get 700 points and be promoted from a “Member” to a “Participant”. As you can see, I never managed to do so, but by the time I got bored I had already helped in the solution of a few hundred questions.

I also got a nice tweet about it!

I got a few complimentary tweets for my presentation, but once Rheingold was on, I was rather forgotten! The networking was good as well, and in any case it was a great experience for me, as I had always wanted to present at a conference. Bling! +5 Achievement Points!

My MFA thesis is on Shakespeare’s Globe Theatre in London and combines all of my interests: Augmented Reality, museums and game mechanics. More to come from that one!

Augmented Reality Workshop

Last week I hosted a workshop on Augmented Reality for the postgraduate students of Ravensbourne College. It was divided into three major parts: a definition of what Augmented Reality is and what it is not; the development of a couple of Augmented Reality applications based on the FLARManager software library; and a brief examination of platform-specific Augmented Reality tools.

The workshop demos required only Adobe Flash Builder, as the attendees had to bring their own laptops (Macs and PCs) and Adobe Flash is the only cross-platform solution for Augmented Reality development.

Please find below the presentation and also the document that was handed out:


ar-workshop-doc

MA Dissertation (A+)

Apart from the deployment of the Augmented Reality Suite presented in the previous post, my thesis for the MA Interactive Digital Media also includes in-depth research into Augmented Reality and the way it has been applied in the cultural heritage sector.


The abstract:

This is a report on the project The Augmented Reality Suite for Cultural Institutions [ARS:CI] and its accompanying research. ARS:CI is a software suite comprising three Apple iPhone applications that take advantage of Augmented Reality to enhance the learning experience in the museum context. The potential of Augmented Reality for cultural heritage institutions has been the subject of research for over a decade; however, ARS:CI is one of the first projects that aims to develop an Augmented Reality solution for museums and galleries that does not involve custom-made or expensive hardware, such as Ultra-Mobile PCs, but instead takes advantage of the smartphones already on the market. The research methods followed were mainly desk research as well as interviews with professionals from the cultural heritage sector. During the software development process, user testing was used to gather feedback, combined with desk research on techniques that would optimise the performance of the applications. The study showed that, despite the large amount of research in the area of augmented reality for museums, only a minority of solutions have proved sustainable and been adopted for permanent exhibitions, mainly because of the cost of purchasing and maintaining the designed Augmented Reality systems. The feedback ARS:CI received was positive: professionals with a background in the cultural heritage sector, as well as Augmented Reality specialists, saw great potential in it as a tool able to transform the learning process into a compelling experience.

The introduction of the document can be found below. If you are interested in the full document, please e-mail me. In addition, since the subject deals with new media, content such as videos is essential. For that reason, wherever the icon on the right appears in the text, it indicates that a relevant video is available in the Video Reel below.

Video Reel

The videos accompanying the dissertation can be found below. They appear in the order in which they are mentioned in the text, i.e. “The ARS:CI”, “The Virtual Dig”, “An Augmented Reality Museum Guide” and “Mixed Reality for the Natural History Museum in Japan”.


  • The ARS:CI
  • The Virtual Dig
  • The Room
  • A Magical Start
  • Brushing
  • Examining Tusks
  • Removing Tusks
  • Examining Masks
  • Finding Statue
  • Combining Statue
  • Fitting Objects
  • An Augmented Reality Museum Guide
  • Mixed Reality for the Natural History Museum in Japan
  • Kondo

When I joined that master’s course, all I knew was that I liked new media in general; a year later, I find myself fully aware of the subject of my interest. A zillion thanks to all the people, related or unrelated to Ravensbourne, who helped me throughout the last academic year.

MA Thesis (Video)

During the last academic year, on the MA Interactive Digital Media course at Ravensbourne College, I discovered my interest in innovative applications for museums, while at the same time having the chance to explore a plethora of new technologies. After long hours of research and practice (i.e. programming, design and user testing) I found my exact subject of interest, Augmented Reality iPhone applications for museums, and have dedicated myself to it ever since. The video below is a demonstration of the largest part of my work.

For the installation of my project at the MA Major Project Exhibition, which took place on 12–16 July at Ravensbourne College, I designed the following poster, which summarises the concept and functionality behind the prototype applications of the Augmented Reality Suite.

Now I am working on my dissertation, which documents my major project and the research that supported its concept and implementation. The approximately 10,000-word essay will be uploaded here by September.

Augmented Reality for Museums

The previous post was an introduction to Augmented Reality explaining the basic terminology and techniques, while this one focuses on applications of the technology that have been hosted, or are still running, in cultural institutions all over the world. To my surprise I found about 10 projects, the majority of which were either experimental or built for seasonal exhibitions, although there are also some exceptional cases of projects that are still hosted in museums. The projects are sorted by time, beginning with the oldest one.

♦ Click on the title to visit the project’s page or publication ♦

2003 – Virtual Dig

When I found this project I was actually amazed, because it clearly has the success factor for me: not only did it use state-of-the-art technologies, like multi-touch tables and augmented reality, but it also made the best of them, with a proper scenario and interaction design. For the complete journey, visit the project’s site and click on the thumbnails to watch the videos.

The Seattle Art Museum and the University of Washington Human Interface Technology Laboratory recently completed a year-long collaboration to create a virtual archeological dig. The Virtual Dig ran from May 10, 2001, to August 12, 2001 as part of the Sichuan China artifact exhibit. During that time more than 25,000 people experienced this novel interactive experience.

The Virtual Dig combined HI-SPACE and ARToolKit interaction technologies along with 10 networked computers, 6 cameras, and 6 projectors. This page focuses on the interactions supported by the technologies. Each interaction is labeled as HI-SPACE or ARToolKit to identify which technology supported the specific interaction.

As a project it was more of a game, meant to attract people to cultural heritage institutions by offering a deeply engaging experience. Thus, it did not provide any additional material or information about the exhibited artefacts or their context.

2004 – Building Virtual and Augmented Reality Museum Exhibitions

This project was web-based and its core was an online Virtual Museum reconstructing the space of a famous museum, i.e. a corridor of the Victoria and Albert Museum in London, which exhibited archaeological artefacts. The Augmented Reality part lay in the interaction between the user and the exhibits. If the user wanted to examine an artefact as if holding it, he pointed a marker at a camera input device and could then see, in the virtual museum, his hand holding the marker with the artefact superimposed, as shown below:

A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.

It is a very interesting concept and implementation, offering a museum visit online with three-dimensional graphics that let viewers interact with the exhibits; however, I do not think I would use this project more than once. For me, most projects that show a virtual world, instead of making me feel thrilled as if I were there, rather make me think “How much do I want to be there!”, leaving me with the feeling that what I experience through them is a rather poor copy.

2005 – The Virtual Showcase

This project ran from 2002 to 2009 at the Deutsches Museum in Bonn, Germany. It uses an optical see-through display to superimpose virtual objects on the real world, while a large ring around the showcase lets users rotate the object as they would with a real showcase. It is an exceptional AR project, as it is both educational and viable, having been used for years in a permanent exhibition.

We describe the Interactive Virtual Showcase, which was developed for the interactive presentation of mixed reality scenarios in museums. We suggest a rugged design and an intuitive interaction metaphor, which is based on a tangible interface. The system was installed at a museum and is running for more than one year without major problems. The reaction of the numerous museum visitors has been very positive, which is partially due to the fact that the technology is mostly hidden from the users.

That is the kind of project I admire: one that is not only exceptional on both a conceptual and a practical level, but is also implemented in a way that makes it viable and sustainable enough to serve a museum in the long term.

2007 – Mixed Reality Museum for Antikythera Mechanism

This is another project that surprised me, but for different reasons. I do not know whether it was ever hosted in the museum it was made for, but both the number and the quality of the applications that make up the project are very interesting. Kolsouzoglou’s research and experimentation around Augmented Reality and Augmented Virtuality can be found throughout his website. A summarising essay is available here.

Well, I did not spend long on this rather experimental project, since all of the concepts implemented there had occurred to me before; I was simply stunned by the amount of work and effort involved, using special equipment and cutting-edge technologies.

2008 – An Augmented Reality Museum Guide

This one is a professional project by the Louvre – DNP Museum Lab for the Louvre, and it actually ran during a seasonal exhibition there on Islamic art. The technology used is a commercial product of Metaio named Unifeye SDK. The tour guide provided information about the artefacts as well as directions helping users find their way around the exhibition. In addition to the actual AR tour guide, the Lab ran a survey and evaluation of the project’s effectiveness.

Recent years have seen advances in many enabling Augmented Reality technologies. Furthermore, much research has been carried out on how Augmented Reality can be used to enhance existing applications. This paper describes our experiences with an AR-museum guide that combines some of the latest technologies. Amongst other technologies, markerless tracking, hybrid tracking, and an Ultra-Mobile-PC were used. Like existing audio guides, the AR-guide can be used by any museum visitor, during a six-month exhibition on Islamic art. We provide a detailed description of the museum’s motivation for using AR, of our experiences in developing the system, and the initial results of user surveys. Taking this information into account, we can derive possible system improvements.

A polished Augmented Reality application which shows the future of this technology in cultural institutions. The evaluation and findings accompanying the project are also very useful. For me, however, the fact that it lasted only for a seasonal exhibition suggests a possible weakness regarding sustainability. A final point is that the project’s cost is rather high, both hardware-wise (Ultra-Mobile PCs) and software-wise (the Metaio licence).

2008 – Bridging the Gap between the Digital and the Physical: Design and Evaluation of a Mobile Augmented Reality Guide for the Museum Visit

In the same vein as the previous project is the AR guide for the Museum of Fine Arts in Rennes, France. It uses an Ultra-Mobile PC, while ARToolKitPlus was used for recognising the paintings. It also used a framework called MAGIC (Mobile Augmented Reality for Indoor Collections), which unfortunately cannot be found online. The interesting concept here is that although they did use marker-based tracking, the markers were the paintings themselves, avoiding any intervention in the actual exhibition.

Can Augmented Reality (AR) techniques inform the design and implementation of a mobile multimedia guide for the museum setting? Drawing from our experience both on previous mobile museum guides projects and in AR technology, we present a fully functional prototype of an AR-enabled mobile multimedia museum guide, designed and implemented for the Museum of Fine Arts in Rennes, France. We report on the life cycle of the prototype and the methodology employed for the AR approach as well as on the selected mixed method evaluation process; finally, the first results emerging from quantitative evaluation are discussed, supported by evidence and findings from the qualitative part of the assessment process. We conclude with lessons learned during the full circle of conception, implementation, testing and assessment of the guide.

The interface is a bit unappealing to me, but that is rather subjective. I like the innovation of using the painting itself as a marker, although users had to stand far enough away for the whole painting to be in the camera’s view and be recognised. As for the evaluation, it is indeed very important, but if you have to dress your users like troops, as shown below, I think that fact alone affects their behaviour.

2009 – Supporting the Creation of Hybrid Museum Experiences

This is another project that used marker-based tracking for Augmented Reality applications. It was an attempt to create a tool that allows museum experts to develop AR experiences. The first concept implemented involves a user holding a camera-equipped mobile device, with a marker placed next to each artwork. When the camera captures the marker, the mobile device displays content relevant to that artwork and plays an audio narration. In addition, if the user had chosen a special audio trail, she was given her own marker, and when it was placed next to an exhibit’s marker, a custom audio narration about the artefact was triggered based on the user’s preferences.

The second concept implemented involved an artwork by Kurt Schwitters and an installation consisting of a table, a projector and a camera. When users put markers on the table, the corresponding objects would appear on the painting itself as part of the collage.

This paper presents the evolution of a tool to support the rapid prototyping of hybrid museum experiences by domain professionals. The developed tool uses visual markers to associate digital resources with physical artefacts. We present the iterative development of the tool through a user centred design process and demonstrate its use by domain experts to realise two distinct hybrid exhibits. The process of design and refinement of the tool highlights the need to adopt an experience oriented approach allowing authors to think in terms of the physical and digital “things” that comprise a hybrid experience rather than in terms of the underlying technical components.

This is a systematic approach to making AR technology more accessible to museum experts, built around an iterative process of three phases. I really like the fact that, from the very early stages of design until the very end of production and testing, the researchers worked closely with museum experts to evaluate the viability and sustainability of the product.

Summary

This is only a subset of the Augmented Reality projects that have taken place in (or were designed for) cultural institutions all over the world, which is not surprising given that the technology has existed for almost a decade. However, Augmented Reality has only become a trend in the last couple of years, mostly because the tools enabling it have become much more accessible than they were, say, four years ago, thanks to the phenomenal commercial success of smartphones.

Getting to know AR

I have been interested in Augmented Reality (AR) for a while now, and until now I had mostly been exploring its practical side, experimenting with ARToolKit. Now I have decided to do some more theoretical research in an attempt to, firstly, clear up in my head what Augmented Reality, Augmented Virtuality (AV) and Mixed Reality (MR) are, and then to document important AR projects for museums. The latter, as this post has already reached 1,000 words, is going to be the next post.

The Augmented Reality described here refers to visual AR; for audio augmented reality I have found the following publications:

Definitions

The most comprehensive definitions of Mixed Reality, Augmented Reality and Augmented Virtuality come from Wikipedia:

Mixed Reality refers to the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time.

Augmented Reality is a term for a live direct or indirect view of a physical real-world environment whose elements are augmented by virtual computer-generated imagery.

Augmented Virtuality refers to the merging of real world objects into virtual worlds.

In other words: Mixed Reality is a superset of both Augmented Reality and Augmented Virtuality, covering all the cases where the user sees real and digital objects co-existing; Augmented Reality is when the user sees the real world as the background with digital objects superimposed on it; and Augmented Virtuality is the reverse situation, where the viewer sees a virtual world with some real objects in it. I must admit that once I had cleared those terms up in my mind I felt better; like Kyle from South Park, I said to myself, “You see, I’ve learned something today!” The subject of this post, however, is Augmented Reality alone.

Augmented Reality

By the definitions given above, any application that uses the camera feed as its background could be considered Augmented Reality. In most cases, though, when Augmented Reality is mentioned, it is implied that the superimposed graphics, and the way they appear, depend on tracking, i.e. the localisation of the position and orientation of real physical objects. For example, in Figure 1 the green creature always appears on top of the black square card. If that card, a real-world object, is moved or rotated, so will the creature. There are several tracking techniques, but in this post we will focus on the two most popular approaches: marker-based and GPS/compass tracking.

Marker-based Tracking

A marker is usually a square black-and-white illustration with a thick black border and a white background. Here are some examples:

Using a camera, the software recognises the position and orientation of the marker and, based on that, creates a 3D virtual world whose origin, i.e. the (0,0,0) point, is the centre of the marker, while the X and Y axes are parallel to its two sides and the Z axis is perpendicular to the plane defined by the X and Y axes. For the ARToolKit library in particular, one of the most popular libraries for marker-based tracking, the coordinate system of the 3D world is as shown below:

The superimposed graphics are defined in that coordinate system, so every movement of the marker subsequently affects the graphics.
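
To make that coordinate system concrete, here is a minimal C# sketch of the idea; it does not use ARToolKit’s actual API, and the pose matrix, camera intrinsics and test point are all invented for illustration. It shows how a point defined relative to the marker ends up at a pixel position on screen:

    // Minimal sketch, not ARToolKit's API: the marker pose, camera intrinsics and
    // test point below are invented purely to illustrate the coordinate system.
    using System;
    using System.Numerics;

    class MarkerProjectionSketch
    {
        static void Main()
        {
            // Pretend the tracker reported this pose: the marker sits 300 mm in
            // front of the camera, rotated 30 degrees around the camera's Y axis.
            Matrix4x4 markerToCamera =
                Matrix4x4.CreateRotationY(MathF.PI / 6f) *
                Matrix4x4.CreateTranslation(0f, 0f, 300f);

            // A virtual point defined in the marker's own coordinate system:
            // 40 mm above the centre of the marker, along its Z axis.
            Vector3 pointOnMarker = new Vector3(0f, 0f, 40f);

            // Step 1: marker space -> camera space, using the tracked pose.
            Vector3 pointInCamera = Vector3.Transform(pointOnMarker, markerToCamera);

            // Step 2: camera space -> pixel coordinates with a simple pinhole model.
            float fx = 800f, fy = 800f, cx = 320f, cy = 240f; // illustrative intrinsics
            float u = cx + fx * pointInCamera.X / pointInCamera.Z;
            float v = cy + fy * pointInCamera.Y / pointInCamera.Z;

            Console.WriteLine($"Camera space: {pointInCamera}");
            Console.WriteLine($"Projected pixel: ({u:F1}, {v:F1})");
        }
    }

Because the tracker reports a new markerToCamera pose on every frame, the superimposed graphics follow the marker as it moves or rotates.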

The main advantages of marker-based tracking are accuracy and stability, in the sense that as long as the marker is clearly in the camera’s view, the scene is solid and positioned precisely. Portability is another benefit, as no changes to the software are needed when the real-world environment changes. However, one major disadvantage I encountered with marker-based tracking is the flickering of the superimposed graphics when the marker is viewed “face-on”. I tried optimisations and workarounds to avoid it, but nothing worked; I simply should not point at the marker directly.

GPS/Compass Tracking

Another very popular tracking technique used in AR applications is the combination of a GPS and a compass, which is especially common on the latest smartphones. The concept here is fairly simple: the software stores some places of interest as if they were on a map (longitude and latitude values for each of them), and since, thanks to the GPS and the compass, it knows the user’s position and the direction they are looking in, it displays information for any stored places in the area in front of the user. For places outside the user’s field of view, arrows may appear pointing in each place’s direction. An indicative example is the Nearest Tube iPhone 3GS application:

GPS/compass tracking lacks accuracy, as GPS has a precision of roughly ±10 metres, and stability, because its functionality depends on the GPS reception in the area where it is used. However, it has the major advantage of being marker-less, so it can be applied without any intervention in the real world.
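
As a rough sketch of the logic just described, and not code from any of the applications mentioned, the heading check might look like this in C#; the user position, compass heading, field of view and point of interest are all invented:

    // Rough sketch of GPS/compass AR logic; the coordinates, heading and point of
    // interest are invented, and no real AR library is used here.
    using System;

    class PoiOverlaySketch
    {
        // Initial bearing (degrees clockwise from north) from the user to a place.
        static double BearingTo(double lat1, double lon1, double lat2, double lon2)
        {
            double phi1 = lat1 * Math.PI / 180, phi2 = lat2 * Math.PI / 180;
            double dLon = (lon2 - lon1) * Math.PI / 180;
            double y = Math.Sin(dLon) * Math.Cos(phi2);
            double x = Math.Cos(phi1) * Math.Sin(phi2) -
                       Math.Sin(phi1) * Math.Cos(phi2) * Math.Cos(dLon);
            return (Math.Atan2(y, x) * 180 / Math.PI + 360) % 360;
        }

        static void Main()
        {
            double userLat = 51.5033, userLon = -0.1195;  // GPS fix (invented)
            double heading = 80;                          // compass heading in degrees
            double fieldOfView = 60;                      // camera's horizontal FOV
            (string Name, double Lat, double Lon) poi = ("Hypothetical Museum", 51.5055, -0.0754);

            double bearing = BearingTo(userLat, userLon, poi.Lat, poi.Lon);

            // Signed angle between where the user looks and where the place lies.
            double offset = ((bearing - heading + 540) % 360) - 180;

            if (Math.Abs(offset) <= fieldOfView / 2)
                Console.WriteLine($"{poi.Name}: draw its label on screen (offset {offset:F1} deg)");
            else
                Console.WriteLine($"{poi.Name}: outside the view, draw an arrow pointing {(offset > 0 ? "right" : "left")}");
        }
    }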

Optical vs. Video see-through

Apart from the tracking method, Augmented Reality applications are also categorised by the display they use: optical see-through or video see-through. An optical see-through display employs half-silvered mirror technology to allow a view of the physical world to pass through the lens while graphical overlay information is reflected into the user’s eyes (Wikipedia, § Augmented Reality). An optical see-through display is usually part of a Head-Mounted Display (HMD), a RoboCop-like device that allows whoever is wearing it to see the real world through the glasses but with computer-generated graphics superimposed. A video see-through display is one in which the real world and the virtual world are shown in a single video stream. Nearest Tube, shown above, like any other AR application for handheld devices, uses a video see-through display, i.e. the smartphone’s screen.

Developing AR Applications

There is a plethora of free tools that can be used to develop AR applications, and an extensive list of them can be found on Wikipedia. Below are some suggested libraries.

Desktop

  • ARToolKit – A cross-platform library for the creation of augmented reality applications, developed by Hirokazu Kato in 1999. It is maintained as an open-source project hosted on SourceForge, and since 2007 its newer versions can be bought from ARToolWorks.
  • ATOMIC Authoring Tool – A cross-platform authoring tool that acts as a front-end for the ARToolKit library. It was developed so that non-programmers can create small, simple Augmented Reality applications, and is released under the GNU GPL licence.
  • OSGART – A combination of ARToolKit and OpenSceneGraph.
  • SSTT Core – Marker-based tracking whose markers are coloured and have no thick black border.

Mobile

  • ARKit – An open-source Augmented Reality library for iPhone.
  • mixare – An open-source (GPLv3) Augmented Reality engine for Android. It works as a completely autonomous application and is also available for building your own implementations.
  • NyARToolkit – An ARToolKit class library released for virtual machines, particularly those that host Java, C# and Android.
  • AndAR – A native port of ARToolKit to the Android platform.

Web

  • FLARToolKit – An ActionScript 3 port of ARToolKit for Flash 9 and later.
  • SLARToolkit – A Silverlight port of NyARToolkit.

Next

Well, I did learn something today... for example, that Android is a particularly AR-friendly platform. The next post is going to be on applications of Augmented Reality for museums.

Pictures from:
http://takemetoyourleader.com/page/11/
http://en.wikipedia.org/wiki/File:Knightmarecorridorofcatacombs.jpg
http://www.artoolworks.com/support/library/Creating_and_training_new_ARToolKit_markers
http://roarmot.co.nz/ar/
http://www.hitl.washington.edu/artoolkit/documentation/cs.htm

Supernatural 3D Chess

Update: Supernatural 3D Chess has now been added to Softpedia’s database of games and can be found here!

A fully functional human-versus-human 3D chess game was my project for the Virtual Reality class of my BSc course in Computer Science. Supernatural 3D Chess is implemented with Microsoft XNA 3.0 and is available on SourceForge, where it has been downloaded about 100 times per month for almost a year now. A while ago, a member of the SourceForge community and XNA developer contributed to Supernatural 3D Chess, improving some existing features. Since the Virtual Reality class was in the last year of my studies, I didn’t have the chance to take it further by adding Artificial Intelligence or networking so that users could play over the internet. That is also the main reason for uploading it to SourceForge, though whenever I come back to XNA development it is the first project I will try to improve.

During the game, the user clicks on a piece and all its available moves on the chessboard are automatically highlighted. The user clicks on a square to make a move, and if the move is not allowed by the rules of chess, an appropriate message appears in the upper-left corner of the screen. If the user wants to arrange the pieces in a specific way on the chessboard, or wants to perform castling or en passant, there is a God mode feature that can easily be turned on and off. When a pawn reaches the other end of the chessboard and God mode is off, the user is asked to select which piece the pawn will be promoted to.

In terms of development, although I had previously studied 3D graphics (for the class of the same name in the third year of my undergraduate studies) and transformation matrices, it took me a while to feel confident with them. As for the rules of chess, highlighting the available moves while taking into account the positions of all the pieces on the board was the most interesting part for me; a rough sketch of the idea follows below.
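
The actual implementation lives in the SourceForge repository; purely as an illustration of that move-highlighting idea, and not the project’s real XNA code, here is a minimal C# sketch that slides a rook along its four directions and stops at the first blocking piece:

    // Illustrative sketch of move highlighting, not taken from Supernatural 3D Chess:
    // slide a rook along its four rays and stop at the first occupied square,
    // capturing it if it belongs to the opponent.
    using System;
    using System.Collections.Generic;

    enum Side { None, White, Black }

    class MoveHighlightSketch
    {
        // board[file, rank]: which side occupies each square (Side.None = empty).
        static Side[,] board = new Side[8, 8];

        static IEnumerable<(int File, int Rank)> RookMoves(int file, int rank, Side mover)
        {
            var directions = new (int df, int dr)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
            foreach (var (df, dr) in directions)
            {
                int f = file + df, r = rank + dr;
                while (f >= 0 && f < 8 && r >= 0 && r < 8)
                {
                    if (board[f, r] == Side.None)
                    {
                        yield return (f, r);        // empty square: keep sliding
                    }
                    else
                    {
                        if (board[f, r] != mover)
                            yield return (f, r);    // enemy piece: capture, then stop
                        break;                      // any piece blocks the ray
                    }
                    f += df; r += dr;
                }
            }
        }

        static void Main()
        {
            board[3, 3] = Side.White;  // a white rook on d4
            board[3, 6] = Side.Black;  // a black piece on d7
            board[5, 3] = Side.White;  // a white piece on f4

            foreach (var square in RookMoves(3, 3, Side.White))
                Console.WriteLine($"Highlight {(char)('a' + square.File)}{square.Rank + 1}");
        }
    }

The same ray-sliding pattern covers bishops and queens, while knights, kings and pawns enumerate fixed offsets instead.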

The source code is available to download on SourceForge; please click the image above to reach the project’s page. If you are a developer and would like to contribute, that would be excellent. Enjoy 3D graphics with the .NET Framework!