Mixed Reality

Mixed Reality Overview

During this project, we will be formally exploring the concept of mixed reality. Using Paul Milgram’s theory of the Reality–Virtuality continuum, we will consider how certain works of art and design fit within the spectrum of mixed reality. After becoming familiar with the concept, you will be asked to design an augmented reality (AR) experience and then develop your project in Unity3D.

Prior to committing to the build of your projects, you will be asked to pitch your concepts to members of the lecturing staff. The pitch should be supported with either detailed graphic storyboards or scripts that outline the user journey in relation to your proposed work.

Definition of Mixed Reality:

The virtuality continuum is a continuous scale ranging between the completely virtual, a virtuality, and the completely real, reality. The reality–virtuality continuum therefore encompasses all possible variations and compositions of real and virtual objects.

(https://en.wikipedia.org/wiki/Reality%E2%80%93virtuality_continuum ) 

Review technology landscape

After receiving the mixed reality brief, we will demo some mixed reality applications. We will look at some AR, VR and 360-degree video applications and discuss how these relate to the virtuality continuum.


During the following two sessions, we will go through the process of creating custom augmented reality (AR) applications. You will be introduced to 3D model-making using photogrammetry and software such as Agisoft’s PhotoScan, Adobe’s Mixamo and SideFX’s Houdini. You will also be shown how to create animations and user interaction using Unity3D. We will also go through the process of publishing completed applications for iOS and Android devices.


Interim Pitch – Five-minute concept pitch (research, ideas, communication of concept).

Final project presentation/demonstration.

Work Mode



Software covered in session 1:

Unity 3D – https://unity3d.com/

Agisoft PhotoScan – http://www.agisoft.com/

Vuforia SDK – https://www.vuforia.com


Software covered in session 2:

Adobe Mixamo – https://www.mixamo.com/

SideFX Houdini – https://www.sidefx.com/products/houdini-fx/

Reference material


Milgram’s Reality–Virtuality continuum – https://en.wikipedia.org/wiki/Reality%E2%80%93virtuality_continuum



Dual Reality (MIT LAB) http://resenv.media.mit.edu/pubs/papers/2009-07-fave2009.pdf

Mixed Reality (Merging Real and Virtual Worlds) http://www.files.tachilab.org/publications/others/tachi1999.pdf


Sci-Fi Books

Snow Crash http://hell.pl/agnus/anglistyka/2211/Neal%20Stephenson%20-%20Snow%20Crash.pdf

Ready Player One https://www.mcleanandeakin.com/sites/mcleanandeakin.com/files/Ready%20Player%20One.pdf



Yucca Invest http://olapehrsonfoundation.org/work/yucca-invest-trading-plant/

Pendulum TV http://www.dieter-kiessling.de/pendel-TV.htm

AR Gallery https://www.artsy.net/article/artsy-editorial-this-augmented-reality-app-reveals-art-in-public-spaces


Immersive theatre

Punchdrunk – https://www.punchdrunk.org.uk/

Punchdrunk – The Drowned Man: A Hollywood Fable

Secret Cinema https://www.secretcinema.org/about


Students are expected to create an AR application for both Android and iOS devices. Students should clearly identify their target audience prior to designing and developing the applications. This will require students to conduct short interviews with members of the target audience. A user-journey should also be produced. During the production of the AR application students will be able to incorporate both 3D and 2D graphic work into their project as well as video, photography and audio.

Learning Outcomes

  • Understand the concepts underpinning mixed reality and position work in relation to Milgram’s continuum.
  • Design and build a simple interactive digital artefact using routine computational techniques and practices.
  • Demonstrate an understanding of contemporary forms of linear and non-linear narrative mechanisms used in digital content.
  • Generate, visualise and pitch a creative concept to an audience of peers and staff.
  • Appraise aesthetic components and navigation structures in interactive screen-based imagery and installations.
  • Demonstrate the ability to research potential applications for the field.

Assessment Criteria

You will be assessed on your ability to:

  • Demonstrate creative insight into the realisation of the final work.
  • Demonstrate awareness of action and reaction between audience and content.
  • Demonstrate ability to conduct research beyond the core module contents.
  • Demonstrate skill in creating a compelling and coherent pitch that clearly expresses your vision of the final piece.
  • Utilise technology in the creation of your piece.
  • Demonstrate the ability to immerse the viewer in your narrative.

Submission Details

  • Learning journal in the form of a blog/sketchbook work/evidence of concept exploration, including documentation of experiments.
  • User-journey or storyboard
  • The “pitch” presentation of concept
  • AR application using Unity and Vuforia
  • Video documentation of the AR application incorporating a recording of the working outcome. This could be done using a screen recording on an Android device or similar. (A nice example of a third-year student’s work can be found here: https://vimeo.com/262115502)

Create a folder called “Mixed_Reality_Continuum_yourname”. Within this folder, create three additional folders: ‘Final’, ‘Studies’ and ‘Pitch’. Put the appropriate work in each folder and submit the zipped file.

Your final submission will be a link to this file.

Monday 16th April 2018

Project Brief Run-Through Notes

Virtuality Continuum

Real World -> Augmented Reality -> Augmented Virtuality -> Virtual Reality

Secret cinema

Punchdrunk

360° video

Photogrammetry – generating 3D models from a series of individual photographs


ARKit for iPhone

Vuforia for Unity

To make something appear in AR, you just need to make the object a child of an ImageTarget

Export the Unity project as an Xcode project (.xcodeproj) so it can be deployed to an iPhone

Tuesday 17th April 2018


Sean shared this article to the course page:

AI has created some creepy but kind of beautiful nude art.

To start, Barrat fed a Generative Adversarial Network (GAN) thousands of nude portraits from across different centuries and strains of art. “Basically what happens is you train the GAN to take in random vectors (lists of numbers), and output portraits,” he explains. Two neural networks, a discriminator and the generator, work in tandem. “The generator comes up with paintings that fool the discriminator, and the discriminator tries to learn how to tell the difference between real paintings from the dataset and fake paintings the generator feeds it.”

Robbie Barrat

Though they get better at doing what they’re programmed to do over time, in this case producing more realistic portraits, sometimes they fall into what Barrat calls a “local minima”. This means they find a way to keep fooling each other without getting better at the task.

Dazed. April 2018. AI has created some creepy but kind of beautiful nude art. [Online] Available at: http://www.dazeddigital.com/science-tech/article/39682/1/artificial-intelligence-ai-has-created-some-creepy-kind-of-beautiful-nude-art
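To get the idea straight in my head, I sketched the two-network game in plain Python: a one-line “generator” and a single logistic “discriminator” playing the same game on a toy number-matching task. Everything here (the constants, the data, the learning rates) is my own made-up miniature, not how the article’s painting GAN actually works:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: numbers drawn from a normal distribution around 4.
# Generator: g(z) = a*z + b, a tiny linear map from random noise.
# Discriminator: a single logistic unit d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for _ in range(2000):
    x_real = random.gauss(4.0, 0.5)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: push d(real) towards 1 and d(fake) towards 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: nudge a and b so the discriminator calls fakes real.
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w
    a += lr * grad * z
    b += lr * grad

fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
print(sum(fakes) / len(fakes))  # drifts away from 0, towards the real mean of 4
```

Obviously the real thing generates images rather than numbers, but the back-and-forth structure is the same.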


Then Cain shared this article, which coincidentally links to the above article:

How artists can set up their own neural network. Part 1: Installation

This article is meant to help artists, designers, and other non-technical people set up a neural network on their computer. Here’s an article where I introduce the idea of neural networks and how they can be used by artists.

Jackalope. April 2018. How artists can set up their own neural network. Part 1: Installation [Online] Available at: https://www.jackalope.tech/how-artists-can-set-up-their-own-neural-network-part-1-installation/


I then explored the link given in the above article:

How artists can use neural networks to make art.

This is the first in a series of articles explaining how artists can use neural networks like DeepStyle to make art.

Let’s start with the basics. What are neural networks? Neural networks are a computational approach to solving problems that is modeled off the structure of the human brain. Rather than taking an input and giving a single output (x + y = z), they take in a large set of inputs and run those inputs through a large set of nodes/neurons that transform those inputs and give a single output (a, b, c, d, e is output as “Cabde”). You throw a bunch of data through the nodes and you’re not going to get a very coherent output. This is how neural networks are different than traditional computing. They learn from their mistakes. You throw in data, you get out junk. You tell the computer that its output is warmer or colder, and then it tries again. It does this millions of times until it gets a balance of nodes that gives a rough approximation of a “correct” answer. This is super useful for the kinds of tasks that humans have usually excelled at while computers have not. Things like categorization, object recognition, speech recognition, or more “intuitive” guesswork.

Jackalope. March 2017. How artists can use neural networks to make art. [Online] Available at: https://www.jackalope.tech/how-artists-can-use-neural-networks-to-make-art/
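Just to get my head around the “warmer or colder” idea, here’s about the smallest possible version in Python: a single artificial neuron learning that numbers above zero are “positive”. Everything in it is my own toy example, not from the article:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy task: learn that inputs above 0 belong to class 1. One neuron with a
# weight w and a bias b; the error signal ("warmer or colder") nudges both
# a little each round.
w, b = random.uniform(-1, 1), 0.0
data = [(x / 10.0, 1.0 if x > 0 else 0.0) for x in range(-10, 11) if x != 0]

for _ in range(500):
    for x, target in data:
        out = sigmoid(w * x + b)
        error = target - out      # how wrong the guess was
        w += 0.5 * error * x      # nudge the weight towards a better answer
        b += 0.5 * error          # nudge the bias too

accuracy = sum((sigmoid(w * x + b) > 0.5) == (t == 1.0) for x, t in data) / len(data)
print(accuracy)  # fraction of points classified correctly after training
```

A real network like DeepStyle is just this idea scaled up to millions of neurons arranged in layers.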

Although this seems like a really interesting idea to explore, I don’t think I’ll be able to learn about neural networks within the given time frame.


Chinese artists bring VR works to Art Basel Hong Kong

The tech boom has not only given artists plenty to critique, but a broader range of tools for creative expression, with virtual and augmented reality being widely embraced. A number of VR and AR installations are on show at this year’s Art Basel Hong Kong, from big names such as Marina Abramović and Anish Kapoor in the HTC Vive Lounge to lesser-known artists in the Discoveries sector. As the scope of these works suggests, this is more than a gimmick.

Financial Times. March 2018. Chinese artists bring VR works to Art Basel Hong Kong [Online] Available at: https://www.ft.com/content/aafbc914-2b8e-11e8-9b4b-bc4b9f08f381


Virtual reality: Marina Abramović and Anish Kapoor at Art Basel Hong Kong

Financial Times. March 2018.  [Online] Available at: https://www.ft.com/content/7f9d89d2-2c66-11e8-97ec-4bd3494d5f14

Wednesday 18th April 2018


Recommends getting a 3-button mouse

File -> Build Settings -> Player Settings … -> (in Inspector) XR Settings -> Vuforia Augmented Reality (at bottom)

Alt + Left Mouse = Move around

Alt + Command = Move scene around

Mouse wheel = Zoom

(In Hierarchy) Right click -> Vuforia -> AR Camera -> Delete Main Camera

Right click -> Vuforia -> Image

Double click object to zoom into it

Select AR camera -> (in Inspector) Open Vuforia configuration -> Go to License Manager in Vuforia webpage -> Copy big chunk of text -> Paste it in ‘App License Key’ box

If your webcam pops up you’ve probably forgotten to do this ^

Select target database on Vuforia, tick all images and then download database and import it into Unity project (box to import should pop up)

Select image target -> (in Inspector) Image Target Behaviour -> Select own database

The width entered when uploading images to the Vuforia database affects the scale of the image target in Unity

Get 3D model

ARCamera -> (Inspector) Database -> tick box

Remove animations from objects

Window -> Animations -> Input keyframes

Create plane -> Put video on plane

Shortcuts for editing tools -> QWERTY

To publish:

Build Settings -> iOS -> Player Settings -> Other Settings. The ‘Bundle Identifier’ must be unique, like com.gsa.workshop. If you use the same Bundle ID it’ll write over the previous version, so a good format is com.‘institution’.‘project name’. Set the ‘Camera Description’, target iOS 9.0, then Build.


Use Xcode (on a Mac) to build onto Apple devices

For free 3D models:




Agisoft PhotoScan

http://www.metainteractive.co.uk/ – photogrammetry

https://www.capturingreality.com/ – photogrammetry

Asset store -> Joystick pack

Friday 20th April 2018


To help me with Scripting in Unity – something I personally struggled with in Getting into Unity:


Saturday 21st April 2018


I have some ideas in mind:

  • ‘Behind Closed Doors’
  • A treasure hunt type idea
  • Shops
  • Children’s book
  • Weather

Behind Closed Doors

I thought that I could use brand logos as targets and use the camera to reveal the truth behind them.


Children’s book

For this I came up with the idea of users pointing their device’s camera at pages of a children’s book and having a 3D animation appear to stand on the page and act out what’s stated within the text. This would add to the experience of the book without taking away from the authentic object, just like Brendan Dawes’ Plastic Player (2016), in which he attempts to recreate a traditional concept through new media:

I also thought that this approach would allow children with a form of sensory impairment, such as a visual one, to still interact with books and maybe view them in a new light. They will be able to view the story from whatever angle they wish, and can zoom in as far as they want.

I had already thought about doing this with ordinary fiction books without images, but I thought of a few upsides to doing children’s books:

  • They are only a couple of pages long, as opposed to hundreds, so in the space of this project (two weeks left) I think I should be able to tackle the majority, if not the whole, of the book.
  • Children’s pages are usually made up of only a couple of sentences, sometimes just one, so there won’t be that much of a storyline to animate.
  • I’m not a professional at 3D modelling, but with children’s books the models won’t need to be photo-realistic, so that’s fine.
  • The materials/texturing also won’t need to be photo-realistic.

Something I will need to consider is a child-friendly interface for my app. It will need to be easy to use and navigate. Obviously the app will also be usable by the older users who may be accompanying the child.

For this project I will use one short book with simple characters, so I am able to translate them into 3D models (or download equivalents). If this app were ever to be used by the general public, I would be able to add more books to the collection; a system of selecting or scanning the book would need to be put in place. The nice thing I find about this idea is that I don’t need to create new books in order to do this; it can be used with old books too. It also doesn’t take anything away from the book, as it still functions as a book without the app.

One could argue that doing this will take away from the imagination of the child, or from the illustrator’s art, but this app won’t be replacing anything or removing the authenticity of the book. Just as turning a book into a film gives it a new format without making the book cease to exist, users are able to experience the story on numerous platforms (words, drawings, films, games) and this app will just be another variation.

I searched the internet for images that I could use for my proposal document and as inspiration for my style:


2D Prototypes

Prototype 1 / Prototype 2.2

I created these with Adobe Photoshop so that I and others could see what my final outcome might look like.

My Proposal

Information I hope people get from my proposal:

  • It’s an application for mobile devices, such as smart phones and tablets
  • It’s augmented reality
  • An interaction will be involved
  • The instructions on how to download and use the app
  • What it will look like


I chose to create a paper effect which replicates that of a book, which I thought was very appropriate for the idea that I have. I also chose a loose handwriting font for the header to make it look very relaxed and appealing to a relaxed audience. The text is very simple and straightforward to read, so readers know exactly the function of the app and how they’d hypothetically use it.

Monday 23rd April 2018


Feedback from Inga and Jen:

Need to investigate the narrative

Look at how illustrators translate words into images

Look at how directors translate scripts into visuals

Child using the app is enough interaction – no need to extend it into touching the screen

Don’t let it just be a replica of what’s already on the page, otherwise it will just be a gimmick

What can AR add to the experience?

How can I enhance the child’s engagement with the story?

Look at the structure of the story

The audio doesn’t have to be just a person speaking the text; it could be sound effects that add to the story

Literal or abstract?

Don’t need to have a full resolved artefact, have some working pages and then do a storyboard to explain the concept for the rest

I need to set some rules and then be true to them

Need to explore ways of visualising a narrative

Don’t just replicate the book

Interviewing children will raise ethical issues


I had not considered the narrative of the story at all

I’ll need to take some time to research the process of turning words into visuals whilst still remaining true to the author’s intentions

I also might have to interview the mother of children who use my app

Ankita suggested that I could write a story and print it

Wednesday 25th April 2018

Research on Narratives

Some links I looked at:


An interesting link I came across was How to Illustrate a Children’s Book on wikiHow and it stated that the first step was to:

Obtain and study the writer’s brief. If you are contracted to illustrate a book, oftentimes writers will provide you with a brief–a list of notes suggesting the main actions in each spread of the book. Study this carefully, and try to remain faithful to the author’s intentions. If you are illustrating your own book, you have unlimited creative license!

This links in closely to Inga’s feedback that I need to be true to the story line. ‘Remaining faithful to the author’s intentions’ whilst implementing my own creative idea is going to be something I’ll need to consider and be careful with when planning out what I am going to animate. Step two is:

Tailor images based on reading level. Different age ranges of readers require different kinds of illustrations. If you are writing for very young children, each major plot movement may need to be portrayed in your illustrations in obvious and easy-to-follow ways. Slightly older readers who can read most or all of the story themselves, however, may only require illustrations that portray central themes and moments in a chapter.

Now I imagine I am going to select a book with images in it, since that’s what most children’s books have. I had identified that I wanted to engage with a younger audience, but I hadn’t thought about a specific age group. I will need to think about whether I need to show everything so that younger children are able to understand.

The aesthetic of my animations will also play a huge part in how the child engages with the storyline. Just as an illustrator puts their own spin on their illustrations, these 3D models and animations will have a sense of character that is influenced separately from the author.

I’ll then need to focus on the 3D characters themselves. You can only communicate so much through a couple of sentences, so when translating that into a continuously moving animation I’ll need to consider some things:

Focus on character development, exploring a range of potential expressions, postures, and moods for each character you intend to illustrate. You can use these as references throughout the whole illustration process.

Work with the text. Your illustrations should seamlessly follow the plot of the book as printed on each page. Try to capture details portrayed in the story, and look for ways to subtly foreshadow events in coming pages with your images.

I’ll need to think of reading the book as a child would for the first time. I can’t reveal anything that comes later in the book, but that doesn’t mean I can’t foreshadow future events. I also don’t want this to make it more difficult for the child to read; it’s supposed to make it easier and more engaging:

If characters are hard for children to identify across multiple spreads, they may struggle to follow the plot of the book.

I will have failed if adding this extra feature confuses the child and makes the book more difficult to read and understand.

I then had a look at How to Illustrate a Children’s Book, and something they mentioned was:

When I illustrate animal characters I still think of them as young children when I think of their expressions and actions.

This is interesting as I didn’t think about having to portray the age of the characters within the story.

As well as providing me with the story, a publisher often produces a written brief, suggesting what they and the author think should happen on each spread. Some texts are self-explanatory but often it is quite difficult to understand the story just by reading the text, because the illustrations also help to tell the story.

This suggests that the added context that goes along with the text should fill in any gaps that may confuse the reader. If something isn’t necessarily clear within the text maybe the animations can clarify it, or hint towards that direction so that the child can use their imagination.

Consider your audience. The first rough sketch is the most taxing, as there are so many things to consider: the composition, the setting and location, the pose and expression of the characters, the props, and so on. I like to start by thinking about the characters’ environment, and also if there are any other minor characters – in this case, all the woodland creatures.

Consider your audience, too. With the details, I try to find a balance between them being authentic to the period but also relevant and familiar to children today.

This reiterates what was suggested on the other website: that I need to consider EVERYTHING, not just the characters themselves. The point about making it relevant and familiar to children of today is interesting, because I’m going to have to animate it whilst remaining true to the storyline but also keeping it relevant in the context of today.

Continuity is probably one of the biggest challenges when illustrating a picture book. One of the trickiest aspects is ensuring that a character’s features or proportions remain the same from any angle.

Keeping a character’s features or proportions the same shouldn’t be a problem, as I’ll be using the same 3D model for each page, but something I will need to keep in mind is the animations. The animations I create will give a certain personality/character, and this character is what will need to be continuous throughout the whole book.

When I illustrated Judy Hindley’s Mummy Did You Miss Me? the entire story took place in a garden, so I had to draw a plan to work out how the character would navigate the garden in such a way that the background detail would be accurate from every angle. You may find that you get a composition working with the text on one spread, but then a certain detail won’t work on the next spread – it is very much like a puzzle.

This is interesting because I will be basing my 3D environment on the 2D illustrations on the page, and that is where the problem lies. When translating them into 3D there are many details that will need to be added that weren’t originally there. I might also end up having to experiment with different layouts and environments until I find one that works with the whole book. If I create a forest and then it’s revealed later on that there’s also a house nearby, that house will need to be included from the beginning of the 3D version, because you can’t reveal something in 3D the way you can in 2D.

I was at ISO Design yesterday, and a group of us were talking about how we engaged more with an educational video that had a personal touch to it. An example would be including a person’s own story whilst developing a product; for some reason you match the person to the product, and so it has a longer-lasting effect. I want to replicate the same idea with my animation. I want the animation to add a deeper connection that the children can engage with, so that hopefully the story has a positive effect.

Another thought is that children are turning away from books now and leaning more towards technology. Hopefully this app bridges that gap and creates a nice middle ground where children can still experience the authenticity of books but also experiment with technology.

Creating these models is transforming the illustrations into a 3D form, but one could argue that I am also intending to add a fourth dimension – time. This gives me a whole new playing field to experiment with.

I know I mentioned that I could play audio that speaks the text written on the page, but I will also need to consider whether I bring the text into the animation too. An alternative would be to place the characters strategically around the page so that they don’t cover the text, so that children can read the text whilst seeing the animations.

Something else that has come to mind is the transition between pages. For the approach I’m taking, each page is a trigger (whether it’s a single spread or double spread I’m unsure of yet; it depends on the book), which means you need to keep directing your phone at a new page to trigger the next scene. Whilst this seems fine to me, any delay, glitching or moments where it fails to work might ruin the flow and make the child lose interest.

Another website How to Illustrate a Children’s Book says:

It seems obvious to say this, but a good illustrator brings the story alive, adding something to the finished product. Without the story the artwork could be meaningless. It has no frame of reference on its own, nothing linking it to a tale or event.

I need to ask myself this question: what is it I’m adding to the finished product?

Children's Book Formats

FeltMagnet, 2016, How to Illustrate a Children’s Book [Online] Available at: https://feltmagnet.com/drawing/How-to-Illustrate-a-Childrens-Book [Last accessed 25/04/18]

Generally speaking, the younger the child, the more pictures will be required, starting with one picture on every page. Easy readers may include pictures on every page to aid comprehension, while middle grade books often have few illustrations, if any.

I thought that this table would be a good way of identifying what target audience to aim for and therefore what style my 3D environments, characters and animations will take.

I thought I’d go with the toddler section (1–3), as this allows me to experiment with simple graphic elements but also to aid their learning process. I went to a local charity shop, had a look at their books and came across ‘A Day With Patch’ by Peter Curry. I had to scan in all of the pages so I could use them as image targets in Unity:


I scanned them all in at 600 dpi in order to tackle the point I mentioned earlier: I don’t want any delay between images, as this would break the illusion and might make the toddler lose interest. I then upped the contrast and brightness a little so that they match the images in the book as closely as possible. Unfortunately, scanning at such a high resolution left me with images between 50 MB and 60 MB, whereas Vuforia only allows uploads of up to 2 MB. I had to take each one into Adobe Photoshop and reduce the quality to around 10%, but luckily they all still received 5-star ratings on Vuforia anyway.
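A rough back-of-envelope for the resizing, sketched in Python: file size scales roughly with pixel count, and pixel count with the square of each edge, so the square root gives the edge scale factor needed. (The 60 MB and 2 MB figures are from my scans; JPEG quality settings change this a lot in practice, which is why I also dropped the quality slider in Photoshop.)

```python
import math

# Shrinking a ~60 MB scan under Vuforia's 2 MB upload limit.
# Size ~ pixel count ~ (edge length)^2, so scale each edge by sqrt(limit/original).
original_mb = 60.0
limit_mb = 2.0

edge_scale = math.sqrt(limit_mb / original_mb)
print(round(edge_scale, 2))  # ≈ 0.18, i.e. each edge at ~18% of its scanned size
```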

I did a test with a random dog model with an animation from Mixamo, and even though it distorted like crazy the idea is still there:


The task is to interview someone from your target market, but because my target audience is toddlers, that makes it very difficult. Therefore I decided to talk to people I knew who interact with toddlers on a day-to-day basis. I spoke to a friend who works with children at numerous nurseries, and also my twin, who is a mum. I explained the concept of the app and this is some of the feedback I got:

  • I could add in a voice-over reading out the text. This would be a helpful feature for children who struggle with reading or who are learning to read. I could even have different accents – or languages!
  • Would children be able to interact with the 3D characters? This is a really good idea, but not something I could achieve within the time span left.
  • Use bright colours to help engage the children.
  • Is it going to be cartoon-ish or photorealistic? Cartoon-ish.
  • How similar is it going to be to the book? I’m going to try and keep it as close to the aesthetic of the book.


This link could be handy for any audio clips I need:


Thursday 26th April 2018


Glasgow International VR Talk

I came across a talk that was part of Glasgow International and had to sign up on Eventbrite. I went with a couple of people from first year and found the talk very interesting. I also thought it tied in well with this project, because this is the type of idea we’re experimenting with, along with AR and AV.

Reflection on the noumenon of Virtual Reality Art

by The School of Simulation and Visualisation

3:30 PM to 4:30 PM


Dr Richard Wang

Associate Professor, Tsinghua University
Deputy Director, Department of Information Arts and Design
Director, Faculty of New Media and Performing Arts

Against the wider background of the right to speak occupied by consumption of culture and capital, the upsurge of virtual reality seems to fade gradually, and our understanding and expectations of it gradually become more rational. However, enthusiasm for virtual reality has inspired the imagination of artists around the world and created rich possibilities. On the one hand, virtual reality brings real cultural heritage into the virtual world. On the other hand, it brings a variety of unprecedented digital creative experiences to the audience. Based on the recognition of the media properties of virtual reality, this presentation will consider whether virtual reality has an independent artistic value, especially from the Chinese perspective, where virtual reality offers a new way to re-establish the connection between traditional culture and current values. We are re-examining “virtual,” “reality,” and the culture we continue to create and occupy.

Received Feedback

Today I received a message from Jen:

Below are my notes from the review on Monday just so you have a record of the informal feedback.

“The idea of a physical book being the tangible interactive item enhanced by AR is interesting.
What new experience is brought to someone interacting with the book? Enhanced existing experience. Think about a specific book and what could be added, changed, subtracted to enhance this experience. This idea had lots of potential but try to narrow the scope in terms of changeable results, narrative changes, who uses it, what the book is. Research towards narrative as opposed to just the technical aspects of AR apps for books. Think about what AR can bring to this scene that a mobile phone camera, app or website cannot normally.”

Friday 27th April 2018

Chat with Leon


Maybe include an interactive feature so children can tap 3D objects to trigger something – I would need to use raycasting, and Leon said he would help me with that

Choose a page or two that has the most going on and focus on that instead of the whole book

Could do things like tap the bird to play an audio clip of bird song


How to make Vuforia’s star rating higher:


3D model of the dog I’m going to use:


Swing set:


Saturday 28th April 2018

3D Models

List of things I might need for numerous scenes:

  • Patch
  • Eyes
  • Material
  • Toys
  • Other characters, e.g. a ladybird



Identify what will be in each room


I’ve managed to get a good starting point for my project. I currently have a swing set that I got for free online along with Patch, and I’ve started to assemble them on my image target. I managed to animate the swing myself in Unity, but I might need to tweak it to look more realistic:
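A swing like this can also be animated straight from a script rather than with the Animation window — a minimal sketch that rocks the pivot with a sine wave (the `amplitude` and `speed` values are guesses I'd tweak by eye):

```csharp
using UnityEngine;

// Rocks the swing back and forth around its top bar using a sine wave.
// Attach to the pivot object whose local X rotation tilts the swing.
public class SwingRocker : MonoBehaviour
{
    public float amplitude = 25f; // maximum tilt in degrees
    public float speed = 0.5f;    // oscillations per second

    void Update()
    {
        float angle = amplitude * Mathf.Sin(Time.time * speed * 2f * Mathf.PI);
        transform.localRotation = Quaternion.Euler(angle, 0f, 0f);
    }
}
```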

Tuesday 1st May 2018

Creating a Material/Texture for Patch

I’m honestly having such a tough time trying to make Patch look less creepy as a ‘Default Material’ dog. I’m having to look into UV mapping and unwrapping, and it’s honestly so much to take in just before the deadline. I won’t be able to recreate the clothes Patch is wearing in the book because I’m not that skilled, so I’m going to pick the colours from his skin and give him a pattern that makes him look somewhat realistic but still links to the character in the book. I simply want to give him a black nose and an orange head that stops in a peak above his eyes, to reflect the markings in the book.

I just can’t seem to manage creating a material in 3DS Max and then moving it into Unity to put on my animated model of Patch on the swing. I’m unsure if it’s a format thing, a scale thing, because he’s animated, or because I’ve sort of included him with another object … I just have no idea, so I’m trying everything. I even tried to upload Patch to Mixamo WITH a texture so that I could export him with it, but for some reason it won’t upload with him even though the option to do so is there.
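One way I could have sanity-checked the texture itself, bypassing the 3DS Max export entirely, is to assign it to Patch's material from a script at runtime — a rough sketch, assuming the texture has been imported into Unity and dragged onto the component in the Inspector:

```csharp
using UnityEngine;

// Assigns a texture to the model's material at runtime. This helps
// separate import/format problems from UV-mapping problems: if the
// texture shows up but looks scrambled, the UVs are the issue.
public class TextureTest : MonoBehaviour
{
    public Texture2D patchTexture; // assigned via the Inspector

    void Start()
    {
        Renderer rend = GetComponentInChildren<Renderer>();
        rend.material.mainTexture = patchTexture;
    }
}
```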

I created a material with a drawn-on smiley face on the stomach so I could see if it was a scaling issue, but when I import it into Unity and place it on Patch he simply just turns yellow and ignores the detail:


I finally managed to do it!!

*insert screenshot*

Thursday 3rd May 2018

Audio and UI


Today I thought I would add a user interface so that users can navigate the app easily. I created a background for my Canvas that the buttons would sit on. I found some effects that made it look a bit like paper, which I thought was a nice touch because it tied in with the tactile feel of the physical book itself:

Background Test

I decided to have only a few buttons for now so it was easy to use:

  • Start
  • Information
  • Go Back

The simple instruction: “Simply aim your phone camera at the page of a book and watch it come to life!”

This might not make sense to others, but it was a way of visualising which buttons I’d want on screen at what point, so that I could then translate that into scripting:
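The scripting for those three buttons boils down to toggling panels on and off. A minimal sketch of what I mean (the panel names are mine, and each method would be wired up via the button's OnClick event in the Inspector):

```csharp
using UnityEngine;

// Hooked up to the UI buttons via their OnClick events: Start hides the
// menu to reveal the camera view, Information shows the instructions
// panel, and Go Back returns to the main menu.
public class MenuController : MonoBehaviour
{
    public GameObject mainMenuPanel;
    public GameObject infoPanel;

    public void OnStartPressed()
    {
        mainMenuPanel.SetActive(false);
        infoPanel.SetActive(false);
    }

    public void OnInfoPressed()
    {
        infoPanel.SetActive(true);
    }

    public void OnGoBackPressed()
    {
        infoPanel.SetActive(false);
        mainMenuPanel.SetActive(true);
    }
}
```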



I then made some finishing touches to the scene, such as adding a terrain. I wanted there to be discolouration below the swing seats so that it looked like there had been natural wear and tear of the grass from children’s feet scraping along the ground.

*screen shots of Unity*


Friday 4th May 2018

Building the App

I came into the studio early to work with Callum and Leon to build my app and put it on Callum’s phone. We decided to do it this way because it would apparently be very difficult with my Windows laptop and Apple iPhone, and Callum has the opposite: an Apple laptop and an Android phone. We first tried to transfer my Unity project to Callum via WeTransfer, but upon receiving the files Unity wouldn’t recognise them. We then borrowed a USB stick, which resulted in the same problem. We decided that I would build the app for Android on my laptop and then WeTransfer it to Callum’s phone. Of course, this was no easy task …

Three hours of downloading numerous pieces of software, moving files, renaming files, locating file paths and restarting Unity later, we finally built it. For some reason I had to download older versions of the software in order for it to work – I’d never have known if Leon hadn’t googled it and found the page below:


Also, the build apparently required JDK version 8 instead of the newest, 10, which I’d downloaded – who knew?

Anyway, we did it! I then transferred it to Callum’s phone and he managed to successfully open it, use the interface and explore Patch on the swing.


There were a couple of problems that I noticed after building it but nothing that can’t be fixed:

I’d originally created the interface with a resolution of 1080 x 1920 in mind, and I selected an option to scale it depending on the screen size. I’d chosen the option ‘Width/Height’, and as you’ll see in the final documentation it scaled to the width of Callum’s screen, so the illusion of the main screen is broken because you can see the live camera feed in the gaps at the top and bottom. I can easily resolve this by selecting just ‘Height’ next time – it’ll crop the sides a bit, but I prefer that as it won’t ruin the illusion.
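In script terms the fix is a one-line change on the Canvas Scaler — a sketch of what I'd set (the match value runs from 0 = match width to 1 = match height):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Forces the Canvas Scaler to match the reference height only, so the
// UI always fills the full screen height and crops the sides instead
// of leaving camera-feed gaps at the top and bottom.
public class ScalerFix : MonoBehaviour
{
    void Start()
    {
        CanvasScaler scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.matchWidthOrHeight = 1f; // 0 = match width, 1 = match height
    }
}
```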

The other problem I noted is that for some reason when you first select ‘Start’ nothing happens, and then it works on the second tap. I am not too sure why at this moment but I imagine it’s something to do with the scripting or linking it up in Unity which I will investigate at a later date.

The terrain glitches about and doesn’t want to leave the screen even when the camera isn’t pointing at the image target. No idea at all why this is but I can also investigate this at a later date.
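If I had to guess, the terrain isn't being hidden when the image target loses tracking. From what I understand of the 2018-era Vuforia API, the default trackable handler toggles renderers on tracking events — a stripped-down version of that idea, which I'd attach to the image target and would need to verify against the actual Vuforia version I'm using:

```csharp
using UnityEngine;
using Vuforia;

// Based on Vuforia's DefaultTrackableEventHandler: when the image
// target is lost, disable every renderer under it so objects like
// the terrain don't linger on screen.
public class HideOnTrackingLost : MonoBehaviour, ITrackableEventHandler
{
    void Start()
    {
        GetComponent<TrackableBehaviour>().RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool tracked = newStatus == TrackableBehaviour.Status.DETECTED
                    || newStatus == TrackableBehaviour.Status.TRACKED;

        // Everything under the target, including the terrain, follows tracking.
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
            r.enabled = tracked;
    }
}
```

Note: the terrain would also need to be parented under the image target for this to cover it.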


Callum then managed to do a screen recording of his phone to show what users would see when they interact with the app. I was actually really pleased with the result.

Final presentation video:

It does actually have audio but it was taken out of this screen recording because Callum and I were talking in the background.

Class feedback:

Patch is cute

It would be cool to monitor children’s learning process so that every time they revisit the book then it updates

Inga knows a woman that studies children’s learning and said that she would be interested in this kind of idea

Leon said that there’s a lot of money to be made from children’s books

Feedback from Inga:

Successful working prototype showing a page of a children’s book. You took onboard the advice about focusing on the development of the visual narrative. You successfully adapted the 3d character and environment models and animations from libraries to create a cohesive and engaging visual outcome that supported the narrative. Use of ray tracing could have enhanced the engagement further by allowing more interaction.

Next steps:

If I were to carry on this idea in my spare time I’d need to consider a lot of things:

  • I’d need to model my own 3D objects and learn more about Autodesk Maya and 3D Max
  • Get more comfortable with Unity
  • Learn ray casting
  • Learn scripting
  • Contact the authors of books I’d like to work with and get their permission and get them in on the process
  • Learn about apps
  • Work on the interface
  • Input audio


I was on FaceTime back home with Beth, my twin sister, and I decided to show her my final documentation of my app. Sam, her son, was in the background and became interested in what was on screen. He has grown up with technology and is so comfortable with phones that he now automatically sticks his tongue out when facing one, expecting Snapchat to react to his facial expression and display a filter over his face. Sam came up to Beth’s phone and started tapping on the screen, pointing to Patch, saying “Dog”, and trying to impersonate a dog by going “woof woof”.

I thought this was very interesting because this was just the documentation of the app, so there was no actual interaction there. This made me think about what would happen if I made it so things happened when children did tap the screen, because this would allow for another element of interaction as well as holding the phone to the book and moving around and zooming in and out.

This will be something I’ll explore in the future.

