Design Domain (Part 2)

Deliverables

This brief is an opportunity to explore new methods and media. A wide range of responses to the brief are anticipated. These could be in the form of photographs, drawings, paintings, prints, models/maquettes, text, film and digital, sound-works, etc. Digital processes should be a key component in the production of the artefact – however the final outcome needn’t exist as digital media per se.

  • A resolved artefact and portfolio, based around the key theme and informed by your research based on your Stage 1 proposal. If the artefact is practically manageable and available, please use it within your presentation.
  • A 100 word positioning statement, including a title, description/materials and rationale.
  • Documentation of research and development, including drawings, paintings, photography and other physical work.
  • Learning Journal.
  • Sketchbook work.

Additionally, we would like to see video documentation of your final piece, up to 2 minutes in length. This video should tell the story of the work from concept to realisation. This may be submitted by the end of the following week (Friday 9th March).


Learning Outcomes

By the end of this course, students will be able to:
  • Demonstrate knowledge of the scope of design as expressed via its main theories, concepts and principles within both specialist contexts and the broader design domain
  • Demonstrate an awareness of the importance of research
  • Apply knowledge, skills and understanding within the context of set project(s) and using some advanced professional skills
  • Undertake critical analysis of design theories, concepts, processes and practice
  • Present complex arguments, information and ideas relevant to the practice of design in a structured, coherent form, using a range of communication methods, to a range of audiences
  • Use standard IT applications in the research, development and presentation of design project work
  • Exercise initiative and independence when carrying out project work
  • Take account of own and others’ roles and responsibilities when carrying out and evaluating tasks
  • Work, under guidance, with others to acquire an advanced understanding of current design thinking and practice

Assessment Criteria

You will meet the outcomes/objectives listed above through a combination of the following activities in this course:

  • A studio-based, research- and practice-led project exploring specific theme(s) surrounding the broad design domain, relative to design subject specialism, supported by seminars, presentations, workshops etc., underpinned by self-directed study.
  • Completion of the formative and summative assessment.

Tuesday 13th February 2018

Click here to see Design Domain Part 1

Getting Back Into It

This is the PDF that I created for Design Domain (Part 1):

DesignDomainPart1_ElliotShaw

Project description:

To virtually recreate the chromatophores of cephalopods in order to create a second augmented skin that effectively and visually communicates emotion in an expressive manner.

Things that I needed to research further:

  • Facial expressions that express emotions – what parts of the face change

Thursday 15th February 2018

Facial Recognition

I did some initial googling to investigate facial recognition:

https://facedetection.com/software/

https://www.betafaceapi.com/wpa/

https://www.faceplusplus.com/

http://blog.dlib.net/2014/02/dlib-186-released-make-your-own-object.html

https://github.com/shiffman/Face-It

Chat with Calum

We spoke about how we could maybe produce the ‘octopus’ effect in Processing.

We also spoke about how cephalopods see a wider colour spectrum than human beings and how this might affect my work.


Friday 16th February 2018

https://www.faceplusplus.com/attributes/#demo 

https://console.faceplusplus.com/documents/6329584

https://console.faceplusplus.com/app/apikey/view/10681

Some notes I took during my chat with Cordelia:

Use questions

Use images first and then processing dots

Benedict Drew – the trickle down effect

The formless

Emotion isn’t really measurable

Cannot be efficiently described

Trying to describe the indescribable

Representing an abstract concept – visually

Expressive portrait

Leave a gap so the viewer can fill it in themselves

Not a scientific tool

I like the fact that she reassured me that my end product doesn’t need to be a flashy, technical device – it’s supposed to be artistic and expressive. This relieved some of the stress I had about getting the details completely correct and has allowed me to explore my idea with wider boundaries.


Monday 19th February 2018

Processing Inspiration

https://www.openprocessing.org/sketch/492096

https://www.openprocessing.org/sketch/49462

https://www.openprocessing.org/sketch/49523

https://processing.org/reference/text_.html


Tuesday 20th February 2018

Links

Still trying to do some research on facial recognition:

https://github.com/atduskgreg/opencv-processing

https://github.com/shiffman/Face-It

https://np1402.wordpress.com/

APIs in Processing?

 


Wednesday 21st February 2018

More Links

https://www.ecosia.org/search?q=microsoft+azure&addon=chrome&addonversion=2.0.3

https://www.ecosia.org/search?q=face+osc&addon=chrome&addonversion=2.0.3


Thursday 22nd February 2018

Yet More Links!

http://www.wekinator.org/examples/#VideoWebcam

http://openframeworks.cc/

http://facetracker.net/


Sunday 1st April 2018

Selecting a Method of Facial Recognition

I have looked at and explored many variations of facial recognition, and I now need to narrow it down to one process that will effectively allow me to recognise facial features. I need to think about whether I want a piece of software/code that recognises an emotion as a whole, or one that recognises individual facial features so I can work out which expressions correlate to which emotions.

*insert drawing diagram of process here*

A random video I came across about using emotional data. Might come in handy for the future:

I thought the best way to choose which approach to take would be to watch videos of each one in action, since I want this to be a real-time project. From the videos I will investigate their approaches and see what best fits my idea.

Real-Time Facial Emotion Recognition with Convolutional Neural Nets

This project reads the expression on the man’s face and translates it into an emoji. I could use this approach but adapt it, replacing the emoji with my own visual output. The link in the description is https://github.com/sushant3095/RealtimeFacialEmotionRecognition

Others

https://jeremykun.com/2011/07/27/eigenfaces/

“We are on a quest to write a program which recognizes images of faces. The general algorithm should be as follows.

  1. Get a bunch of sample images of people we want to recognize.
  2. Train our recognition algorithm on those samples.
  3. Classify new images of people from the sample images.

We will eventually end up with a mathematical object called an eigenface. In short, an eigenface measures variability within a set of images, and we will use them to classify new faces in terms of the ones we’ve already seen. But before we get there, let us investigate the different mathematical representations of images.”

I decided to investigate Real-Time Facial Emotion Recognition with Convolutional Neural Nets further, which led me to download the GitHub repository provided in the video description.

# About:
Takes pictures or webcam video as input. It detects all faces in each frame, and then
classifies which emotion each face is expressing. Then replaces each face with an emoji corresponding to that emotion.
Recognized emotions: Neutral, Happy, Sad, Angry, and Surprise.

Training accuracy was 91% and test accuracy was 75%, with the following requirements:
- User's facial expression must be strong / exaggerated
- Adequate Lighting (no shadows on face)
- Camera is at eye level or slightly above eye level

It was also stated that I would require:

- Python 3
- Packages for Caffe and OpenCV
- Webcam

Obviously I already knew that I’d require a webcam for a live feed. My laptop has an in-built webcam, so that should suffice for now unless I feel the need to buy an external one. As for Python, I’m not very strong in this area, and I think I had an outdated version because when I attempted to open a Python file, Python would open but then immediately close itself. I went to https://www.python.org/downloads/ and downloaded Python 3.6.5. Upon completion of the download it directed me to https://docs.python.org/3.6/tutorial/index.html in case I needed help – which I most likely will. I also went to https://github.com/BVLC/caffe to download Caffe, as stated in the requirements.

Even after the installation completed, the problem persisted. I tried to open Python 3 by itself and it’s still happening, so I’m not sure what’s going wrong there. After a bit of googling, a webpage suggested downloading PyCharm, so I thought I’d give it a go. Some handy links:

It doesn’t seem to be able to interpret the code.

No Python Interpreter

I’m going to try something else whilst I’m stuck with this attempt.

This was a really good video as it was easy to follow, but I don’t think I want to delve into Python right now as I don’t have much time to learn an entirely new language – I’ve only just got to grips with Processing!

I’m procrastinating so much because I’m so lost. I don’t NEED to use an IP webcam – it does look cool though:

A New Approach

I’ve been caught up in all this new software and I’m getting nowhere, so I’ve decided to take a step back and look at the project with fresh eyes. Breaking it down, I need to:

  • Get my webcam working live in Processing
  • Work out a way to identify emotion
  • Translate the identified emotion into a pattern/colour which will be completely abstract but somewhat resemble the skin of an octopus
  • Work out an interface with instructions so users know what to do
  • Decide on how I’m going to output the pattern, either on a screen or via a projector

Whilst I currently can’t solve the facial recognition part, I thought I’d follow Cordelia’s suggestion and create a keyPressed version so that users can still interact with it. It just requires them to press a key on the keyboard that corresponds to the emotion they feel they’re currently experiencing.
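A minimal sketch of that interaction might look something like this – the key-to-emotion mapping and the colour values are just placeholders I’ve picked for illustration, not my final palette:

color currentColour;

void setup() {
  size(640, 480);
  currentColour = color(255); // white until an emotion key is pressed
}

void draw() {
  background(0);
  fill(currentColour);
  ellipse(width/2, height/2, 200, 200);
}

void keyPressed() {
  // Each key stands in for one of the six emotions; the colours are placeholders
  if (key == 'h') currentColour = color(255, 200, 0);  // happiness
  if (key == 's') currentColour = color(0, 80, 200);   // sadness
  if (key == 'a') currentColour = color(200, 0, 0);    // anger
  if (key == 'f') currentColour = color(120, 0, 160);  // fear
  if (key == 'd') currentColour = color(0, 160, 60);   // disgust
  if (key == 'p') currentColour = color(255, 120, 0);  // surprise
}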

Mixing it with a pre-existing sketch:

You can’t tell in these screenshots, but the lines of colour are actually pulsing downwards and somewhat mimic the waves of colour that cephalopods use when communicating. The X-axis of the mouse also controls the size of the ellipses that show the colour. It’s a very simple sketch and was only made as an experiment, but I actually really like the result.

I experimented a bit and changed the fill opacity inside the pushMatrix() block, and the outcome changed dramatically:

Test_2.1_S

It only ‘drew’ colour while I had the key pressed and didn’t redraw the black background on top, so I was able to overlap the different coloured ellipses.
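Roughly, the trick is never calling background() inside draw() and using a low-alpha fill – a simplified sketch of the effect (not my exact test code):

void setup() {
  size(640, 480);
  noStroke();
  background(0); // drawn once and never cleared, so the ellipses accumulate
}

void draw() {
  if (keyPressed) {
    // A semi-transparent fill lets the overlapping ellipses blend together
    fill(random(255), random(255), random(255), 40);
    // The mouse's X position controls the size of each ellipse
    float s = map(mouseX, 0, width, 10, 120);
    ellipse(random(width), random(height), s, s);
  }
}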


Tuesday 3rd April 2018

Processing and APIs

After working out the process, I decided to have a look at the facial recognition part again. One of the links I explored earlier was Face++. This analyses people’s faces and then outputs lots of information about them, such as age and gender but, more importantly for me, emotion. With this identified emotion I can ‘simply’ link it to a corresponding Processing sketch/colour palette. The website works through APIs, which I have very little experience with, but I’m willing to learn if it will give me the desired outcome for my project. I googled to see if anyone else had attempted to use APIs within Processing, but I only came across pages (1) (2) of people who were stuck at the same hurdle. The first query is also three years old and all the links that looked helpful no longer work – great.

Page 2 directed me to a data tutorial by Daniel Shiffman that talks about how to grab data from various sources and bring it into your sketch. I feel like this will help me because my mind is currently all over the place and Shiffman has a great way of explaining things:

“A means for doing this is an API or “application programming interface”: a means by which two computer programs can talk to each other.”

“The ease of using a Processing library is dependent on the existence of clear documentation and examples. But in just about all cases, if you can find your data in a format designed for a computer (spreadsheets, XML, JSON, etc.), you’ll be able to save some time in the day for a nice walk outside.”

“One other note worth a mention about working with data. When developing an application that involves a data source, such as a data visualization, it’s sometimes useful to develop with “dummy” or “fake” data. You don’t want to be debugging your data retrieval process at the same time as solving problems related to algorithms for drawing. In keeping with my one-step-at-a-time mantra, once the meat of the program is completed with dummy data, you can then focus solely on how to retrieve the actual data from the real source. You can always use random or hard-coded numbers into your code when you’re experimenting with a visual idea and connect the real data later.”

Yeeeeeey! The information I’ve been looking for all this time!

“Once you have that key, you can store it in your code as a string.

// This is not a real key
String apiKey = "40e2es0b3ca44563f9c62aeded4431dc:12:51913116";

You also need to know what the URL is for the API itself. This information is documented for you on the developer site, but here it is for simplicity:

String url = "http://api.nytimes.com/svc/search/v2/articlesearch.json";

Finally, you have to tell the API what it is you are looking for. This is done with a “query string,” a sequence of name value pairs describing the parameters of the query joined with an ampersand. This functions similarly to how you pass arguments to a function in Processing. If you wanted to search for the term “processing” from a search() function you might say:

search("processing");

Here, the API acts as the function call, and you send it the arguments via the query string. Here is a simple example asking for a list of the oldest articles that contain the term “processing” (the oldest of which turns out to be May 12th, 1852).

// The name/value pairs that configure the API query are: (q,processing) and (sort,oldest)
String query = "?q=processing&sort=oldest";

This isn’t just guesswork. Figuring out how to put together a query string requires reading through the API’s documentation.  Once you have your query you can join all the pieces together and pass it to loadJSONObject().

For grabbing data from the web, an XML (Extensible Markup Language) or JSON (JavaScript Object Notation) feed will prove to be more reliable and easier to parse. Unlike HTML (which is designed to make content viewable by a human’s eyes) XML and JSON are designed to make content viewable by a computer and facilitate the sharing of data across different systems.”
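Putting those quoted pieces together, the actual request would look something like this (still using the tutorial’s placeholder key and its NYT endpoint as the example, not my own data source):

String apiKey = "40e2es0b3ca44563f9c62aeded4431dc:12:51913116"; // the tutorial's placeholder key
String url = "http://api.nytimes.com/svc/search/v2/articlesearch.json";
String query = "?q=processing&sort=oldest";

// Join the endpoint, the query string and the API key, then fetch the JSON
JSONObject json = loadJSONObject(url + query + "&api-key=" + apiKey);
println(json);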

Detect API can detect all the faces within the image. Each detected face gets its face_token, which can be used in follow-up analysis and operations. With a Standard API Key, you can specify a rectangle area within the image to perform face detection.

This is the reference for using the Face++ Detect API: https://console.faceplusplus.com/documents/5679127

This is the reference for using the Face++ Face Analyze API: https://console.faceplusplus.com/documents/6329465
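From what I can tell from the documentation, Face++ expects an HTTP POST rather than a plain URL request, so loadJSONObject() on its own probably isn’t enough. If I do go down this route, my rough, untested understanding is that it would look something like the sketch below, using the ‘HTTP Requests for Processing’ library – the endpoint and parameter names here are my best reading of the docs and may well need correcting:

import http.requests.*;

void setup() {
  // Detect endpoint and parameters as I currently understand them from the docs
  PostRequest post = new PostRequest("https://api-us.faceplusplus.com/facepp/v3/detect");
  post.addData("api_key", "MY_API_KEY");       // placeholder key
  post.addData("api_secret", "MY_API_SECRET"); // placeholder secret
  post.addData("image_url", "http://example.com/face.jpg");
  post.addData("return_attributes", "emotion");
  post.send();

  // The response should come back as JSON with an emotion score per detected face
  JSONObject json = parseJSONObject(post.getContent());
  println(json);
}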


Thursday 12th April 2018

Taking a New Turn

After conversing with my tutor online, I realised that I would be unable to complete my desired outcome in the time I have left. I therefore decided to go back to my early Processing sketches and attempt to realise the same idea, just without the facial recognition. I still intend to learn this, but I will need to implement it at a later stage, either while developing this project further or in another project.

Unfortunately, just as I thought I was getting somewhere again, I realised a problem with my Processing sketch: I can’t overlay a moving sketch on top of a live webcam feed because, even though I managed to make the webcam frames slightly opaque to reveal what’s underneath, 30 FPS of images and ellipses simply merge together and get lost. It becomes a frame, then a row of ellipses, then a frame, then another row of ellipses, over and over 30 times a second – too fast for the human eye to separate.

Test_2.2_S
I’m currently using an avocado as a place-holder for my face.
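For reference, a stripped-back version of how a semi-transparent webcam frame can be drawn over other graphics in Processing (simplified, not my exact test code):

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  background(0);
  // The ellipse sits underneath the video
  fill(255, 100, 180);
  ellipse(width/2, height/2, 300, 300);

  if (cam.available()) cam.read();
  tint(255, 128); // draw the webcam frame at roughly 50% opacity
  image(cam, 0, 0);
  noTint();
}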

I’ve managed to make the live webcam pause when you press a certain key, so I’m thinking I could get users to press a key depending on which emotion they think they’re feeling, and the paused frame would then reflect that emotion. In the future, once I learn facial recognition, I’d like to mask the face out of the frame and manipulate only that, but until I learn how to do so I’ll have to apply the effect to the whole frame. Unfortunately, in these examples the pulsing ellipse effect that I liked is hidden by the live webcam feed.

Test_2.3_S
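The pause itself is simple enough: a boolean that stops new frames being read while the feed is frozen. A stripped-back sketch of the idea (not my full code):

import processing.video.*;

Capture cam;
boolean frozen = false;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  // Only read new frames while the feed isn't frozen,
  // so the last frame stays on screen after a key press
  if (!frozen && cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}

void keyPressed() {
  frozen = !frozen; // toggle the pause; the chosen key could later map to an emotion
}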


Friday 13th April 2018

Starting Again!

It seems very late but I got a new source of inspiration from another of Daniel Shiffman’s tutorials. It reminded me of a video I saw whilst researching during part 1:

I decided to use this newfound information and completely redo my project, using some of the things I’d learnt along the way. I first tried out a ‘spotlight’ effect, which reminded me of the light some octopi give off. In this example you move your mouse around to reveal what’s underneath – in this case, my avocado:
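The spotlight effect boils down to darkening each pixel based on its distance from the mouse – something along these lines (simplified, and using the live webcam rather than a still image):

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  if (cam.pixels.length < width * height) return; // wait for the first frame
  loadPixels();
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      int loc = x + y * width;
      // Pixels fade to black the further they are from the mouse,
      // leaving a circular 'spotlight' of roughly 100 px radius
      float d = dist(x, y, mouseX, mouseY);
      float b = map(constrain(d, 0, 100), 0, 100, 1, 0);
      color c = cam.pixels[loc];
      pixels[loc] = color(red(c) * b, green(c) * b, blue(c) * b);
    }
  }
  updatePixels();
}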

I then carried forward this idea of ellipses revealing things and mixed it with some Shiffman knowledge. The squares react to light: the darker the pixel being analysed, the smaller the square, and the brighter the pixel, the larger the square:

Test_5.1_S
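The core of it is sampling the video in a grid and mapping each sampled pixel’s brightness to the size of the square drawn in that cell – roughly:

import processing.video.*;

Capture cam;
int cell = 16; // size of each grid cell in pixels

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  noStroke();
  rectMode(CENTER);
}

void draw() {
  if (cam.available()) cam.read();
  background(0);
  cam.loadPixels();
  if (cam.pixels.length < width * height) return; // wait for the first frame
  for (int x = 0; x < width; x += cell) {
    for (int y = 0; y < height; y += cell) {
      color c = cam.pixels[x + y * width];
      // Brighter pixels produce bigger squares, darker pixels smaller ones
      float s = map(brightness(c), 0, 255, 2, cell);
      fill(255);
      rect(x + cell/2, y + cell/2, s, s);
    }
  }
}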

I then added the interface that I’d created earlier:

Test_5.2_S

I then edited it so that the colours continuously randomise between set boundaries, so it looks like the chromatophores I was trying to achieve. I also changed the squares into circles, again to resemble the chromatophores:

Test_5.3_S
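The colour ‘flicker’ just comes from re-randomising each circle’s fill every frame between limits chosen for the current emotion – for example (these ranges are placeholders, not my final palette):

// Placeholder colour boundaries for one emotion
float rMin = 180, rMax = 255;
float gMin = 0,   gMax = 80;
float bMin = 0,   bMax = 60;

void setup() {
  size(640, 480);
  noStroke();
}

void draw() {
  background(0);
  for (int x = 0; x < width; x += 20) {
    for (int y = 0; y < height; y += 20) {
      // A new random colour within the emotion's boundaries every frame
      fill(random(rMin, rMax), random(gMin, gMax), random(bMin, bMax));
      ellipse(x + 10, y + 10, 16, 16);
    }
  }
}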

On such a small scale it hurts my eyes a little bit, so I feel it looks much more satisfying on a larger scale. The text also disappears behind the circles because the circles are continuously updating, but it is easier to make out on a larger scale.

I needed to select some colours that would correspond with the six emotions that I’d researched during part 1:

For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman’s most influential work revolved around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate and could not have learned associations for facial expressions through media. Another classic study found that when participants contorted their facial muscles into distinct facial expressions (for example, disgust), they reported subjective and physiological experiences that matched the distinct facial expressions. His research findings led him to classify six emotions as basic: anger, disgust, fear, happiness, sadness and surprise.

Wikipedia. (2017). Emotion – Basic emotions [Online] Available at: https://en.wikipedia.org/wiki/Emotion#Basic_emotions [Last accessed 14th Nov 2017]

Paul Ekman Group. (2018) Dr. Paul Ekman [Online] Available at:  https://www.paulekman.com/paul-ekman/

I also came across this diagram on a website that spoke about “graphic designers use of color in both technical and communicative ways”:

Screen shot 2010-04-11 at 8.24.12 PM

Blogspot. 2010. Introduction to Color [Online] Available at: http://johnbonadies.blogspot.co.uk/2010/04/introduction-to-color.html

I decided to use this as a link to my emotions and colour, although in the future I’d like to explore this a lot further.


Sunday 15th April 2018

Realisation

As stated at the beginning of this blog post, the ‘Deliverables’ section says “This brief is an opportunity to explore new methods and media.” I feel that I have indeed done this; it’s just that I have not been able to fully complete the process for my final outcome. I have still been able to experiment with Processing, use the webcam as an input and learn lots of new bits of code that I can use in the future. My final result still technically does what I intended it to do in Design Domain Part 1, just not as fancy as I’d originally have liked. I just hope that it still communicates the same idea.

We were also required to write a positioning statement that would have accompanied the work when it was presented at the Open Studios for all of GSA to see. Unfortunately I missed that, but I thought I would write one anyway:

Digitally Augmented Skin, 2018

Elliot Shaw

Interactive Artwork

My artwork is centred around attempting to represent concepts that have no visual representation, specifically tackling the idea of emotions. For this project I’ve created an interactive installation that allows users to input data, which is then processed to output a visual that digitally represents their emotion. The aim is to virtually recreate the chromatophores of cephalopods in order to create a second augmented skin that effectively and visually communicates emotion in an expressive manner.

I took some of the wording from my original PDF because I’d used it specifically to sum up my project as a sort of ‘blurb’ before the reader continued.

I need to remember what Cordelia said: it shouldn’t be treated as a scientific tool. It’s supposed to be expressive, so users may not completely agree with the outcome – but I suppose, in a way, that should be encouraged. Maybe if I do come to present it I should leave a notepad nearby for users to make notes?

I think the positioning of my final piece is important. I chose quite a quirky font because I felt the project had taken on quite a childish, playful theme. It is no longer such a fact-based project and has moved towards being a fun-oriented one that allows users to interact with it no matter what their true emotion is. I see this installation in an aquarium or museum, next to some information about octopi and sea life. This would allow visitors to learn about octopi while also engaging in an interactive experience that will hopefully encourage them to remember both the experience and the information they have just learnt.


Monday 16th April 2018

Documentation

The links below are video clips and audio that I used in my documentation video. I wanted the sound of gentle waves at the ocean’s surface to set the scene, as you don’t really see much of the surrounding water considering most of the shots are close-ups.

Final Documentation Video

A photo of some people from my class using it:

IMG_7805

Steps for the future:

  • Work out how to use APIs in Processing
  • Work out how to do facial recognition
  • Do more colour theory
  • Do user testing
  • Get feedback
  • Make it an installation
  • Let people interact with it
  • Re-do documentation

Part 1: Design Domain

