
Thursday 15 March 2012

Journeys in an Augmented Reality

An early AR experiment, built using BuildAR. 
A couple of years ago, Augmented Reality (AR) was the next ‘big thing’ to hit the internet. It was a stop-and-gawp technology: you showed your friends and colleagues, and talked about how you could use it for your next big idea (in my case, in teaching).

If you don’t know what I’m on about, Wikipedia has a definition:

“Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.”

If you are still not sure, the best thing is to try it for yourself. You’ll need a computer with a webcam and a printer. This GE example was one of the first sites to get AR noticed.

The example above was typical of the technology a few years ago. Since then it has become more mainstream, but in a form hidden and integrated into popular devices. Smartphones (Apple and Android) have lots of apps that use the technology (see Star Walk as a popular example). If you’ve ever leapt about your living room playing on an Xbox Kinect or PlayStation EyeToy you’ve also experienced it, but in a much more sophisticated way. So what happened to it for learning and teaching? Well, in my experience it turns out that the path to creating your own AR applications is tricky, expensive and frustrating, but offset by the possibility of creating engaging, interactive learning objects that leap off the screen.

Let me explain.

Soon after playing with the GE Augmented Reality demo I realised that dentistry might be a good fit for the technology. I teach first-year students tooth morphology, and at the end of a long lecture I expect them to be able to identify all of the adult teeth in isolation. To do this, the lecture is team-taught (with one lecturer being the knowledgeable tutor, the other a stooge, pretending not to know how to identify the teeth and making notes on an OHP that the students copy). In the lecture, models of the teeth are thrown into the audience, to be thrown back with the tooth identified. The latter half of the lecture is a group task where the students have to arrange models of teeth into the correct orientation and order before they are permitted to leave (a good motivator, I’ve found). It is these models of the teeth that the students covet the most, but due to their cost I cannot provide students with them (and they are made in-house, so the students can’t buy them either). If I could provide the models to the students digitally, as 3D AR models, I would be able to let the students recreate the practical elements of my lecture at home.

The ‘vision’ was to provide students with an Augmented Reality experience of tooth morphology that they could use to learn and revise the teeth.



Making an Augmented Reality

A quick examination of most of the AR apps on the web showed them to be using the same underlying code, by a Japanese coder named Saqoosha. His FLARToolKit was the first ‘kit’ of code to allow the creation of AR applications within Flash and webpages, and is offered for free. Installing and setting up the code is quite complicated, and has inspired many YouTube tutorials on how to do it (none of them short). FLARToolKit does offer ‘in the browser’ Augmented Reality, but it is not fast or smooth at rendering the image, and of course requires the Flash browser plugin (which means the iPad is out). If ActionScript is not at your fingertips, there are some alternatives that work by having a downloadable program to set up the AR, and then the user downloads a viewer to see the images. This is currently the easiest way to do it, and I recommend the free version of BuildAR if you want to try it out.
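To give a feel for what these toolkits are doing under the hood, here is a minimal sketch of the per-frame marker-tracking loop. It uses Python and OpenCV’s ArUco module rather than FLARToolKit itself (a stand-in chosen purely to illustrate the idea; the window title and key binding are my own choices):

    # Conceptual sketch of the detect-marker-and-overlay loop that tools like
    # FLARToolKit run on every video frame. OpenCV's ArUco module stands in
    # for FLARToolKit here; this illustrates the idea, it is not the tool.
    import cv2

    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

    cap = cv2.VideoCapture(0)  # the webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is not None:
            # A real AR app would estimate the marker's 3D pose here and
            # render a model on top of it; drawing the outline stands in
            # for that step.
            cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        cv2.imshow("AR sketch", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

The point to take away is how little the ‘kit’ has to expose: find the marker in the frame, work out its pose, draw something there, and repeat fast enough to feel live.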


Making models


The next big hurdle with making AR is getting the models that you want to augment reality with into the computer. There are a few options, and I started initially with the free amateur methods, which turned out to be clumsy and not that effective (although they did work). These methods ‘stitch’ together photographs taken from set angles to calculate depth, from which a 3D mesh is generated. For models of molecules, something like ChemDraw or Jmol will let you build from scratch, and Google SketchUp will let you build simple objects (and has an AR viewer plugin too). In my case, I needed teeth, and these are not easy to draw in 3D. Talking to colleagues, most also had real things (in cupboards, usually) that they wanted to present virtually (e.g. a larynx, beetles, bones) and it would be easiest if these could be scanned into 3D, and then imported into AR.
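Whatever route you take, the mesh that comes out the other end is surprisingly simple: a list of points in space plus the faces that join them. The short Python sketch below pulls both out of a Wavefront OBJ file, the plain-text format many scanners and modelling tools can produce (the filename is a placeholder for whatever your scanner saves):

    # A 3D mesh is, at heart, a list of points plus the faces joining them.
    # Minimal reader for the geometry lines of a Wavefront OBJ file.
    def read_obj(path):
        vertices, faces = [], []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "v":    # vertex line: v x y z
                    vertices.append(tuple(float(n) for n in parts[1:4]))
                elif parts[0] == "f":  # face line: f v1 v2 v3 (1-based indices)
                    faces.append(tuple(int(p.split("/")[0]) for p in parts[1:]))
        return vertices, faces

    verts, faces = read_obj("tooth_scan.obj")
    print(len(verts), "vertices,", len(faces), "faces")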

With the support of the Faculty of Medicine, Dentistry and Health I obtained a NextEngine 3D Laser Scanner, which scans objects from a few millimetres across up to large surfaces (e.g. walls, archaeological graves). I soon managed to produce scans of my tooth models, and imported them quite quickly into the BuildAR tool to experiment. At this point I started to capture the interest of students and colleagues, as it is quite a visual and “wow” technology. The project was subsequently demonstrated to CiCS (Christine Sexton’s blog), and was featured in the internal University e-zine Overview.


Publishing

By mid-2011 I had produced scans of the model teeth, and started to look at the next phase of the project: to build an AR website that would support the teaching of tooth morphology with as few barriers to entry for students as possible. At this point there is a trade-off: downloadable ‘apps’ to view the AR content could provide fast, smooth multi-model (multi-teeth in my case) visuals, but only to those who had the right machine (e.g. a Windows PC) and the desire to install the right software. The alternative is to run it in a browser with Flash, which means single models and, as mentioned above, jerkier playback. The project stalled at this point, as it was clear that creating AR that would work for most students, and that was also easy to author (for colleagues who would want to duplicate this project), was not yet possible.

I was lucky at this point, as a company called Aurasma (who are owned by HP) visited the University to demo their smartphone-based AR authoring and viewing tool. This offers the ability to create AR very rapidly using pictures as triggers, placing video or images over objects around you. The cover of a textbook or a sign can trigger the playing of a video as though it were on that surface, and all of this can be authored very simply within a free downloadable app (search for Aurasma on the Apple App Store or Android Market). With the free developer kit, 3D models can be included in scenes, so it should be possible to have my virtual teeth popping out of some handouts. Well, so far I haven’t managed to make it work in a way that is easy for others to replicate. The reason? How the 3D models are stored.

Just like text documents, 3D model files come in different types. There are simple mesh files that contain only the shape of the object (the text equivalent would be a plain .txt file), or they can include formatting data such as the textures (colours), a map wrapped around the object to make it look realistic (the text equivalent would be a Word .doc file). The latter come in many types, the popular ones for AR being OBJ and DAE. The problem is that Aurasma will only import DAE files (and only a certain type of these), and the 3D scanner doesn’t save in that format, so you need to convert the files. Unfortunately, 3D object editing software is extremely complicated for the novice user; it is not simply a case of opening the file and selecting ‘Save As’. For the experts there are 3ds Max and Maya (thousands of pounds for a licence) if you want to model in 3D, or there are free options such as Blender and Wings 3D. All are complicated, and transforming a model for use in AR requires the helping hand of a 3D modelling expert (at least until you can establish a workflow of buttons to press to get the desired result).
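Once an expert has shown you which buttons to press, the conversion can at least be scripted so that others can repeat it. Below is a minimal sketch driving Blender from the command line through its Python API (run as blender --background --python convert.py); the operator names match the Blender releases of this era, and the filenames are placeholders. Whether the result is the particular flavour of DAE that Aurasma accepts is, as above, another matter:

    # Convert a scanned OBJ model to COLLADA (.dae) without touching
    # Blender's interface.
    import bpy

    # Clear the default scene so only the scan ends up in the export.
    bpy.ops.object.select_all(action='SELECT')
    bpy.ops.object.delete()

    # Import the scanner's OBJ output (geometry plus any texture mapping)...
    bpy.ops.import_scene.obj(filepath="tooth_scan.obj")

    # ...then re-export the whole scene as COLLADA.
    bpy.ops.wm.collada_export(filepath="tooth_scan.dae")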


Thoughts on Augmented Reality for education

Having achieved a successful import into Aurasma of videos and a 3D object (a duck!), I have started to explore the implications of the technology for education and teaching. After all this effort, I’m coming to the following conclusions:

  • While the workflow to obtain a 3D model to display in Augmented Reality is achievable by a relative novice to 3D modelling, the lack of file compatibility between the scanner and the newer AR applications means the technology is not yet ready for widespread use.
  • The barriers to entry for a student wishing to engage with AR are still high (computer, camera, correct software installed). Smartphones have made it easier.
  • For accessing video and 3D models, AR provides an engaging access route, but not a practical one. The technology has been developed aggressively for marketing and advertising, where a 3D model or video triggered in AR by a magazine ad is a throwaway novelty. For learning, students will want quick access to the video or model when they want it, and not necessarily by having to hold up their phone or webcam to get it. A link to the video on YouTube would be much more practical.

So where next? I am currently exploring other delivery methods for getting my 3D models to students virtually. Apple’s new iBooks Author application for the iPad has the facility to import 3D files, which can be wrapped in with text and pictures to form an interactive book. I’m currently putting the finishing touches to my first pilot of this technology and hope to report my findings in a future blog post.

If you have been, or are considering, experimenting with AR, please get in touch through the comments.

1 comment:

  1. Hi Chris,
    amazing stuff - like yourself I've been interested in VR for a while, and more recently, AR. I'm particularly interested in how it can be used within archaeology - I already have some models made up, and I think I have them in DXF format which should be a good starting point. I'm particularly keen to bring them into a mobile environment so people can experience reconstructions of archaeological monuments whilst they're visiting the real things, so am keen to look at something like Aurasma for this. Perhaps I can come and have a chat sometime about how to progress? Graham
