MA Thesis (Video)

During the last academic year, on the MA Interactive Digital Media course at Ravensbourne College, I discovered my interest in innovative applications for museums, while at the same time I had the chance to explore a plethora of new technologies. After long hours of research and practice (i.e. programming, design and user testing) I found my exact subject of interest, Augmented Reality iPhone applications for museums, and I have dedicated myself to it ever since. The video below is a demonstration of the largest part of my work.

For the installation of my project at the MA Major Project Exhibition, which took place on 12–16 July at Ravensbourne College, I designed the following poster, which summarises the concept and the functionality behind the prototype applications of the Augmented Reality Suite.

Now I am working on my dissertation, which documents my major project and the research that supported its concept and implementation. The essay, approximately 10,000 words long, will be uploaded here by September.

Augmented Reality for Museums

The previous post was an introduction to Augmented Reality, explaining the basic terminology and techniques, while this one focuses on applications of this technology that have been hosted, or are still running, in cultural institutions all over the world. To my surprise I found about 10 projects, the majority of which were either experimental or for seasonal exhibitions, though there are also some exceptional cases of projects which are still hosted in museums. The projects are sorted by time, beginning with the oldest one.

♦ Click on the title to visit the project’s page or publication ♦

2003 – Virtual Dig

When I found this project I was actually amazed, because it clearly has the success factor for me: not only did it use state-of-the-art technologies, like multi-touch tables and augmented reality, but it also made the best of them, with a proper scenario and interaction design. For the complete journey, visit the project’s site and click on the thumbnails to watch the videos.

The Seattle Art Museum and the University of Washington Human Interface Technology Laboratory recently completed a year-long collaboration to create a virtual archeological dig. The Virtual Dig ran from May 10, 2001, to August 12, 2001 as part of the Sichuan China artifact exhibit. During that time more than 25,000 people experienced this novel interactive experience.

The Virtual Dig combined HI-SPACE and ARToolKit interaction technologies along with 10 networked computers, 6 cameras, and 6 projectors. This page focuses on the interactions supported by the technologies. Each interaction is labeled as HI-SPACE or ARToolKit to identify which technology supported the specific interaction.

As a project it was more of a game, meant to increase the attraction of cultural heritage institutions by offering a deeply engaging experience. Thus, it did not provide any additional material or information about the artefacts exhibited or about their context.

2004 – Building Virtual and Augmented Reality Museum Exhibitions

This project was web-based and its core was an online Virtual Museum, reconstructing the space of a famous museum, namely a corridor of the Victoria and Albert Museum in London, which exhibited archaeological artefacts. The Augmented Reality part was in the interaction between the user and the exhibits. If users wanted to examine an artefact as if they were holding it, they pointed a marker at a camera input device and could then see, in the virtual museum, their hand holding the marker with the artefact superimposed, as shown below:

A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.

It is a very interesting concept and implementation, offering a museum visit online using three-dimensional graphics and allowing viewers to interact with the exhibits; however, I do not think I would use this project more than once. For me, most projects which show a virtual world, instead of making me feel thrilled as if I were there, rather make me think “How much do I want to be there!!”, leaving the feeling that what I experience through them is a rather poor copy.

2005 – The Virtual Showcase

This project ran from 2002 to 2009 in the Deutsches Museum in Bonn, Germany. It uses an optical see-through display to augment virtual objects in the real world, while a large ring all around allows users to rotate the object just as they would with a real showcase. It is an exceptional AR project, as it is both educational and viable, having been used for years in a permanent exhibition.

We describe the Interactive Virtual Showcase, which was developed for the interactive presentation of mixed reality scenarios in museums. We suggest a rugged design and an intuitive interaction metaphor, which is based on a tangible interface. The system was installed at a museum and is running for more than one year without major problems. The reaction of the numerous museum visitors has been very positive, which is partially due to the fact that the technology is mostly hidden from the users.

That is the kind of project I admire: exceptional on both the conceptual and the practical level, and implemented in a way that makes it viable and sustainable enough to serve a museum in the long term.

2007 – Mixed Reality Museum for Antikythera Mechanism

This is another project which surprised me, but for different reasons. I am not aware whether it was ever hosted in the museum it was made for, but both the number and the quality of the applications which constitute this project are very interesting. Kolsouzoglou’s research and experimentation around Augmented Reality and Augmented Virtuality can be found throughout his website. A summarising essay is available here.

Well, I did not spend a long time on this rather experimental project, since all the concepts implemented there had occurred to me before; I was simply stunned by the amount of work and effort, using special equipment and cutting-edge technologies.

2008 – An Augmented Reality Museum Guide

This one is a professional project by the Louvre – DNP Museum Lab, and it actually ran during a seasonal exhibition on Islamic art at the Louvre. The technology used is a commercial product of Metaio named Unifeye SDK. The Tour Guide hosted information about the artefacts, as well as directions helping users find their way around the exhibition. In addition to the actual AR Tour Guide, the Lab ran a survey and evaluation of the project’s efficiency.

Recent years have seen advances in many enabling Augmented Reality technologies. Furthermore, much research has been carried out on how Augmented Reality can be used to enhance existing applications. This paper describes our experiences with an AR-museum guide that combines some of the latest technologies. Amongst other technologies, markerless tracking, hybrid tracking, and an Ultra-Mobile-PC were used. Like existing audio guides, the AR-guide can be used by any museum visitor, during a six-month exhibition on Islamic art. We provide a detailed description of the museum’s motivation for using AR, of our experiences in developing the system, and the initial results of user surveys. Taking this information into account, we can derive possible system improvements.

A polished Augmented Reality application which shows the future of this technology in cultural institutions. The evaluation and the findings accompanying the project are also of much use. For me, however, the fact that it lasted only for a seasonal exhibition suggests a possible weakness regarding sustainability. A final point is that the project’s cost is rather high, both hardware-wise (Ultra-Mobile PCs) and software-wise (Metaio licence).

2008 – Bridging the Gap between the Digital and the Physical: Design and Evaluation of a Mobile Augmented Reality Guide for the Museum Visit

In the same vein as the previous project is the AR Guide for the Museum of Fine Arts in Rennes, France. It includes an Ultra-Mobile PC, while ARToolKitPlus was used for recognising the paintings. It also used a framework called MAGIC, standing for Mobile Augmented Reality for Indoor Collections, which unfortunately cannot be found online. The concept here is that marker-based tracking was used, but the markers were the paintings themselves, avoiding any intervention in the actual exhibition.

Can Augmented Reality (AR) techniques inform the design and implementation of a mobile multimedia guide for the museum setting? Drawing from our experience both on previous mobile museum guides projects and in AR technology, we present a fully functional prototype of an AR-enabled mobile multimedia museum guide, designed and implemented for the Museum of Fine Arts in Rennes, France. We report on the life cycle of the prototype and the methodology employed for the AR approach as well as on the selected mixed method evaluation process; finally, the first results emerging from quantitative evaluation are discussed, supported by evidence and findings from the qualitative part of the assessment process. We conclude with lessons learned during the full circle of conception, implementation, testing and assessment of the guide.

The interface is a bit appalling to me, but that is rather subjective. I like the innovation of using the painting itself as a marker, although users had to stand far enough away that the whole painting was in the camera’s view and could be recognised. Regarding the evaluation, it is indeed very important, but if you have to dress your users like troops, as shown below, then I think this fact on its own affects their behaviour.

2009 – Supporting the Creation of Hybrid Museum Experiences

This is another project which used marker-based tracking for Augmented Reality applications. It was an attempt to create a tool which allows museum experts to develop AR experiences. The first concept implemented involves a user holding a camera-equipped mobile device, with a marker next to each artwork. When the camera captures the marker, the mobile device displays content relevant to the particular artwork and plays an audio narration. In addition, if the user had chosen a special audio trail, she was given a marker of her own, and when it was placed next to the marker of an exhibit, a custom audio narration about the artefact was triggered, based on the user’s preferences.

The second concept implemented involved an artwork by Kurt Schwitters and an installation consisting of a table, a projector and a camera. When users put markers on the table, corresponding objects would appear on the painting as part of the collage.

This paper presents the evolution of a tool to support the rapid prototyping of hybrid museum experiences by domain professionals. The developed tool uses visual markers to associate digital resources with physical artefacts. We present the iterative development of the tool through a user centred design process and demonstrate its use by domain experts to realise two distinct hybrid exhibits. The process of design and refinement of the tool highlights the need to adopt an experience oriented approach allowing authors to think in terms of the physical and digital “things” that comprise a hybrid experience rather than in terms of the underlying technical components.

This is a systematic approach to making AR technology more accessible to museum experts, built on an iterative process of three phases. I really like the fact that from the very early stage of design until the very end of production and testing, the researchers worked closely with museum experts to evaluate the viability and sustainability of the product.


This is only a subset of the Augmented Reality projects that have taken place in (or were designed for) cultural institutions all over the world, for the technology has existed for almost a decade. However, it has only been a couple of years since Augmented Reality became a trend, mostly because the tools enabling this technology have become much more accessible than they were, say, four years ago, due to the phenomenal commercial success of smartphones.

Getting to know AR

I have been interested in Augmented Reality (AR) for a while now, and until recently I was mostly exploring its practical side, experimenting with ARToolKit. Now, I decided to do some more theoretical research in an attempt to, firstly, clear up in my head what Augmented Reality, Augmented Virtuality (AV) and Mixed Reality (MR) are, and then document important AR projects for museums. Since this post already runs to 1,000 words, the latter will be the subject of the next post.

The Augmented Reality described here refers to visual AR, whilst for audio augmented reality  I have found the following publications:


The most comprehensive definitions of Mixed Reality, Augmented Reality and Augmented Virtuality come from Wikipedia:

Mixed Reality refers to the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time.

Augmented Reality is a term for a live direct or indirect view of a physical real-world environment whose elements are augmented by virtual computer-generated imagery.

Augmented Virtuality refers to the merging of real world objects into virtual worlds.

In other words: Mixed Reality is a superset of both Augmented Reality and Augmented Virtuality, representing all the cases where the user sees real and digital objects co-existing; Augmented Reality is when the user sees the real world as a background with digital objects superimposed; and Augmented Virtuality is the reverse situation, where the viewer sees a virtual world with some real objects in it. I must admit, when I cleared those terms up in my mind, I felt better; like Kyle from South Park I said to myself, “You see, I’ve learned something today!” The subject of this post, however, is Augmented Reality alone.

Augmented Reality

By the definitions given above, Augmented Reality could be any application that uses the camera feed as its background. In most cases, though, when Augmented Reality is referred to, it is meant that the superimposed graphics, and the way they appear, depend on tracking: the localisation of the position and orientation of real physical objects. For example, in Figure 1 the green creature always appears on top of the black square card. If that card, a real-world object, is moved or rotated, so will the creature. There are several tracking techniques, but in this post we will focus on the two most popular approaches: marker-based and GPS/compass tracking.

Marker-based Tracking

A marker is usually a square black-and-white illustration with a thick black border and a white background. Here are some examples:

Using a camera, the software recognises the position and orientation of the marker and, according to that, creates a 3D virtual world whose origin, i.e. the (0,0,0) point, is the centre of the marker, while the X and Y axes are parallel to its two sides and the Z axis is perpendicular to the plane shaped by the X and Y axes. For the ARToolKit library in particular, one of the most popular for marker-based tracking, the coordinate system of the 3D world is as shown below:

The superimposed graphics are designed according to that coordinate system; thus every move of the marker subsequently affects the graphics.
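As a rough sketch of how this works, the marker's pose can be expressed as a homogeneous 4×4 matrix that maps points from the marker's coordinate system into the camera's. The Python below is only an illustration with a simplified single-axis rotation; the function names are my own and not part of ARToolKit's actual API:

```python
import math

def marker_transform(tx, ty, tz, yaw_deg):
    """Simplified marker pose: a rotation about the marker's Z axis
    plus a translation, expressed as a 4x4 homogeneous matrix."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return [[c, -s, 0, tx],
            [s,  c, 0, ty],
            [0,  0, 1, tz],
            [0,  0, 0, 1]]

def to_camera_space(m, point):
    """Map a point (x, y, z) from marker coordinates into camera coordinates."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(m[row][i] * v[i] for i in range(4)) for row in range(3))
```

Every frame the tracker re-estimates this matrix from the camera image, so drawing the graphics through it is what makes them follow the marker's every move and rotation.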

The main advantages of marker-based tracking are accuracy and stability: as long as the marker is clearly in the camera’s view, the scene is solid and precisely positioned. Portability is another feature, as there is no need to change the software when the real-world environment changes. However, one major disadvantage I faced with marker-based tracking is the flickering of the superimposed graphics when the marker is viewed face-on. I tried optimisations and work-arounds to avoid it, but nothing worked; I simply should not point at the marker directly.

GPS/Compass Tracking

Another very popular tracking technique used in AR applications is the combination of a GPS and a compass, especially on the latest smartphones. The concept here is fairly simple: the software has some places of interest stored as if they were on a map (longitude and latitude values for each one of them). Since, with the GPS and the compass, the software knows where the user is and the direction he or she is looking in, if any stored places lie in the area in front of the user, the information about each one of them is displayed. For the places outside the user’s field of view, arrows may appear pointing in each place’s direction. An indicative example is the Nearest Tube iPhone 3GS application:

GPS/compass tracking lacks accuracy, as GPS has a precision of about ±10 metres, and stability, because its functionality depends on the GPS reception of the area in which it is used. However, it has the major advantage of being marker-less, and thus it can be applied without any intervention in the real world.
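The concept can be sketched in a few lines of Python. The place names, the 60° field of view and the function names below are all hypothetical; a real application would read the coordinates and heading from the device's GPS and compass APIs:

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a place, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def places_in_view(lat, lon, heading, places, fov=60):
    """Return the names of the stored places whose bearing from the user
    falls within the camera's field of view around the compass heading."""
    visible = []
    for name, plat, plon in places:
        # signed angular difference between the place's bearing and the heading
        diff = (bearing(lat, lon, plat, plon) - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            visible.append(name)
    return visible
```

A place behind the user fails the field-of-view test, and the same signed difference could drive the arrows that point towards off-screen places.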

Optical vs. Video see-through

Apart from the tracking method, Augmented Reality applications are also categorised by the display they use: optical see-through versus video see-through. An optical see-through display employs half-silvered mirror technology to allow views of the physical world to pass through the lens while graphical overlay information is reflected into the user’s eyes (Wikipedia, § Augmented Reality). An optical see-through display is usually part of a Head-Mounted Display (HMD), a RoboCop-like device which allows the wearer to see the real world through his or her glasses, but with computer-generated graphics superimposed. A video see-through display is when the real world and the virtual world are shown in one single video stream. Nearest Tube, shown above, like any other AR application for handheld devices, uses a video see-through display, i.e. the smartphone’s screen.

Developing AR Applications

There is a plethora of free tools that can be used to develop AR applications, and an extended list of them can be found on Wikipedia. Some of the suggested libraries are listed below.


  • ARToolKit A cross-platform library for the creation of augmented reality applications, developed by Hirokazu Kato in 1999. It is maintained as an open-source project hosted on SourceForge, and its newer versions since 2007 can be bought from ARToolWorks.
  • ATOMIC Authoring Tool A cross-platform authoring tool which is a front-end for the ARToolKit library. It was developed to let non-programmers create small, simple Augmented Reality applications, and is released under the GNU GPL licence.
  • OSGART A combination of ARToolKit and OpenSceneGraph.
  • SSTT Core Marker-based tracking whose markers are coloured, without a thick black border.


  • ARKit Open-Source Augmented Reality library for iPhone.
  • mixare – Open-Source (GPLv3) Augmented Reality Engine for Android. It works as a completely autonomous application and is available as well for the development of own implementations.
  • NyARToolkit – an ARToolkit class library released for virtual machines, particularly those which host Java, C# and Android.
  • AndAR – A native port of ARToolkit to the Android platform.


  • FLARToolKit An ActionScript 3 port of ARToolKit for Flash 9 and later.
  • SLARToolkit A Silverlight port of NyARToolkit.


Well, I did learn something: Android is a particularly AR-friendly platform. The next post is going to be on Applications of Augmented Reality for Museums.


Supernatural 3D Chess

Update: Supernatural 3D Chess has now been added to Softpedia’s database of games and can be found here!

A fully functional human-versus-human 3D chess game was my project for the Virtual Reality class of my BSc course in Computer Science. Supernatural 3D Chess is implemented with Microsoft XNA 3.0 and is available on SourceForge, where it has been downloaded about 100 times per month for almost a year now. A while ago, a member of the SourceForge community and XNA developer contributed to Supernatural 3D Chess, improving some existing features. Since the Virtual Reality class was in the last year of my studies, I didn’t have the chance to take it further by adding Artificial Intelligence, or networking so that users would be able to play over the internet. That’s also the main reason for uploading it to SourceForge; whenever I come back to XNA development, it’s the first project I will try to improve.

During the game, the user clicks on a piece and all its available moves on the chessboard are automatically highlighted. The user clicks on a square to make a move, and if the move is not allowed by the rules of chess, an appropriate message appears in the upper left corner of the screen. In case the user wants to arrange the pieces in a specific way on the chessboard, or wants to castle or capture en passant, there is a God mode feature which can easily be turned on and off. When a pawn reaches the other end of the chessboard and God mode is off, the user is asked to select which piece the pawn will be promoted to.
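As an illustration of this kind of move highlighting, here is a sketch of generating the legal destination squares for a rook: each direction is scanned outward until a piece blocks the ray. The board representation here is an assumption for the example, not taken from the actual Supernatural 3D Chess code:

```python
def rook_moves(pos, own_pieces, enemy_pieces):
    """Squares a rook at `pos` (file, rank) can move to on an 8x8 board,
    given the sets of squares occupied by friendly and enemy pieces."""
    moves = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        x, y = pos[0] + dx, pos[1] + dy
        while 0 <= x < 8 and 0 <= y < 8:
            if (x, y) in own_pieces:
                break                  # blocked by a friendly piece
            moves.append((x, y))
            if (x, y) in enemy_pieces:
                break                  # a capture square ends the ray
            x, y = x + dx, y + dy
    return moves
```

The highlighted squares are simply the returned list; a click on any square outside it is what triggers the illegal-move message.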

In terms of development, though I had previously studied 3D graphics (in the class of the same name in the third year of my undergraduate studies) and transformation matrices, it took me a while until I felt confident with them. As for the rules of chess, highlighting the available moves considering the positions of all the pieces on the board was the most interesting part for me.

The source code is available to download on SourceForge; please click the image above to reach the project’s page. In case you are a developer and would like to contribute, that would be excellent. Enjoy 3D graphics with the .NET Framework!

Gaming in Museums

It’s interesting to see how ideas evolve over time, appropriating and improving existing projects. Regarding gaming in museums for educational purposes, one of the first concepts was Scavenger Hunt in 2004:

(Scavenger Hunt, 2004) This production thesis is to create an interactive Scavenger Hunt game for children aged 9 to 13 as a history-learning tool in a museum setting at Chicago Historical Society. The main focus of this project is to develop a usable interface and interactivity for Pocket PC PDA using Macromedia Flash MX 2004 Professional.

The production is based on “stealth learning” and other pedagogy theories that say children can learn effectively while they play games and achieve simple goals.

The software will engage the target audience to answer correctly 10 scavenger hunt questions by finding the matching history artifacts in the museum. It will also help the target audience develop an interest in learning about history.

A more advanced approach came a year later from the cooperation of MIT and the University of Wisconsin with Mystery at the Museum. The innovations of this project included location-aware information and collaboration between visitors.

(Mystery at the museum, 2005) Through an iterative design process involving museum educators, learning scientists and technologists, and drawing upon our previous experiences in handheld game design and a growing body of knowledge on learning through gaming, we designed an interactive mystery game called Mystery at the Museum (the High Tech Whodunnit), which was designed for synchronous play of groups of parents and children over a two to three hour period. The primary design goals were to engage visitors more deeply in the museum, engage visitors more broadly across museum exhibits, and encourage collaboration between visitors. The feedback from the participants suggested that the combination of depth and breadth was engaging and effective in encouraging them to think about the museum’s exhibits. The roles that were an integral part of the game turned out to be extremely effective in engaging pairs of participants with one another. Feedback from parents was quite positive in terms of how they felt it engaged them and their children. These results suggest that further explorations of technology-based museum experiences of this type are wholly appropriate.

RFID and GPS technology were introduced to such concepts in 2007 with The Design of Prisoner Escape from the Tower, an alternate reality game for the Tower of London. [More: Article 1, Article 2]

(The design of prisoner escape from the tower, 2007) In this design case study we describe the process by which we designed, tested, developed and ran a field trial of an interactive location aware historical game called “Prisoner Escape from the Tower”. The game uses mobile devices with GPS and Active RF transmitters and receivers to trigger events and interactions around the tower and with the Beefeaters. The game is based on authentic historical events and players help prisoners to escape by completing tasks which allow them to re-enact their actual escapes. The paper describes how the game was developed between two geographically dispersed teams and the steps involved in creating a location specific interactive game.

Claimed to be the first Alternate Reality Game ever hosted by a museum, Ghosts of a Chance followed in 2008.

(Ghosts of a chance, 2008) In the fall of 2008, The Smithsonian American Art Museum (SAAM) hosted an Alternate Reality Game (ARG) titled “Ghosts of a Chance.” This was the first ARG in the world to be hosted by a museum. The game offered both new and existing museum audiences a novel way of engaging with the collection in its Luce Foundation Center for American Art, a visible storage facility that displays more than 3,300 artworks in floor-to-ceiling glass cases.

Ostensibly, “Ghosts of a Chance” invited gamers to create objects and mail them to the museum for an ‘exhibition’ curated by two game characters posing as employees. But the ‘game within the game’ was also a challenge to uncover clues to the narrative that binds those objects, and to investigate the way objects embody histories. The game culminated on October 25 with a series of six scavenger-hunt-like “quests” designed for players of all ages. Over 6,000 players participated online and 244 people came for the onsite event.

If a project worth mentioning is missing, please contact me. So, what’s next?

BSc Thesis (Video)

My dissertation for the BSc (Hons) in Computer Science from the Informatics department of the University of Piraeus is a Silverlight application for the artistic archive of the National Theatre of Northern Greece; a video of it is uploaded on Vimeo. Microsoft Silverlight is one of my favourite technologies, since it embraces the designer/developer combination which clearly defines me.

The application is used as an interface for the theatre’s 13 GB database. All the controls included in the application (accordion timeline, sequential fade-in/out menus, thumbnails view etc.) were written from scratch. I learnt a great deal during this project and it definitely made me a better developer. I would attach my documentation if it weren’t in Greek, but in case you do speak Greek, please contact me.

New Media for Museums

For the first term of my course we were asked to develop a research strategy and apply it to research on a subject of our interest. We had to hand in a 3,000-word essay analysing both our strategy and our findings.

The topic I chose is how new media have been used to support and enhance the museum experience. My essay starts with the research strategy, followed by the literature review, while the third and largest part of the document is the actual study.

The report takes two approaches to the subject: a purpose-driven and a medium-driven approach. Under the purpose-driven approach there are several case studies regarding applications of new media in museums for learning purposes, while under the medium-driven approach a couple of cases are analysed around the use of mobile devices in exhibition spaces.


Case study: Rhizome

For my course I had to write a case study on a topic of my interest. As my subject I chose the online net art community Rhizome. The main reason for that is that the book in which I heard about new media for the very first time, and which changed my life forever, was written by Mark Tribe, Rhizome’s founder. This case study gave me the chance to explore the net art movement and the way it emerged and evolved over the years, during the period when the internet was rapidly becoming the powerful medium of today.

Reflecting on that piece of work, I am not very satisfied with my performance. That is reasonable, since it was my first proper research text, in which there had to be a bibliography and references for every little piece of information mentioned in the essay. It took me a while to get used to that. Another issue was that it took me ages to finish, because I kept getting distracted by all the awesome projects I stumbled upon. That’s probably the price of researching interesting topics.

Rave in my head (Video)

In the first term of my post-graduate studies at Ravensbourne I made an installation to experiment with a wide variety of technologies, from Max/Jitter to non-electronic stuff like wooden structures! It was an attempt to create a multi-touch surface of water, and to my surprise it partially worked. During this project I had the chance to experiment with video montage and video effects, and as a result I made the following video, available on Vimeo.

I used Adobe Premiere Pro as well as Adobe After Effects, and I captured the video with my Canon IXUS photo camera while cycling! That’s the main reason I chose to make this video an abstract, psychedelic one – no proper footage! As my first video production experiment though, I must admit I’m OK with it!

New media for Nike

For our course we were asked to prepare a literature review on a theme of our interest.

What is a literature review?
A literature review, according to our tutor, is an account of what has been published on a topic by accredited scholars and researchers. Specifically, it lists a number of resources which contain information about the subject of the literature review, with a description and an evaluation of each one. The literature review usually precedes the research on the subject defined, and its value is significant since it:
  1. ensures that a research on the chosen subject hasn’t already been done
  2. justifies the research (why is research on this subject worth doing?)
  3. provides context for the research

New media and Nike
For my literature review I chose the subject “The use of new media by Nike and its effect on Nike’s profile and profits” for the following reasons:

  1. Nike is a world-renowned sportswear brand, which from its beginning has had an elegant and meticulous profile, reflected in its products, stores and advertising campaigns.
  2. Nike is one of the first companies to realise the power of new media, and it has taken advantage of them since their very beginning (websites, installations etc.). A representative example is the Human Race, which will take place on the 24th of October ’09.
  3. As strange as it might sound, NikeTown is a source of inspiration for me.

Slide for NikeID

The results
To present the findings of my literature review, I made a 5-minute presentation, as we were asked to do. The PowerPoint file is available below, with the narration in the notes section.

♦ “The use of new media by Nike” Literature Review

In the presentation I followed the Pecha Kucha style, as we were encouraged to do, using only images for slides, without any text. The picture above is one slide from the presentation.