Module III

Today we are starting the last module of this course. Jens introduced the module and the topic we are going to spend the next three weeks working on.

Learning goals

The learning goals from the course are:

  • To understand what machine learning is and what it is being used for.
  • To explore the possible usefulness and difficulties of machine learning as a ‘design material’ for interaction design.
  • To learn how to use machine learning in relation to recognition of bodily movements and gestures.
  • To get an understanding of machine learning in the broader context of artificial intelligence.

Topic

The topic of the course is ‘machine learning as a design material for interaction design with regard to bodily movement and gestures (embodied interaction)’.

The question is how to use the body to gesture, and how to get the machine to learn and understand these gestures. In Module I we worked with sketches that were already ‘trained’ to recognise a body or a movement, while in this module we are going to work on our own models that will try to recognise a body gesture or movement.

This will be based on the data that comes from the phone: when the phone is moved, that data will be used as input for the sketch. In this course we are going to experiment more with the material and find out what it can do and what it cannot. The module itself is focused on tinkering with the material before we actually try to implement our ideas with it.
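As a small, hypothetical illustration (assuming the sketches are p5.js-based, which exposes the phone's accelerometer directly, and not the exact code from class), the motion data is just a stream of numbers that a sketch can read on every frame:

```javascript
// Hypothetical p5.js sketch: the phone's motion arrives as plain numbers
// (accelerationX/Y/Z) that the sketch can read and react to every frame.

function setup() {
  createCanvas(windowWidth, windowHeight);
  textSize(24);
}

function draw() {
  background(230);
  // show the raw accelerometer values
  text('x: ' + nf(accelerationX, 1, 2), 20, 40);
  text('y: ' + nf(accelerationY, 1, 2), 20, 80);
  text('z: ' + nf(accelerationZ, 1, 2), 20, 120);
  // the same numbers can drive the sketch directly, e.g. move a circle around
  circle(width / 2 + accelerationX * 20, height / 2 + accelerationY * 20, 40);
}
```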

We are still going to follow the usual design process:

Tinkering > Ideate/Sketching > Implementing > Experience/Reflect

Why this topic?

The practical value of machine learning as a material for interaction design is not well known. In this module we should explore whether it will work or not, how difficult it is, what it can be used for, and how, after all of this, it can help with understanding the bigger questions around artificial intelligence.

How to work with this topic?

Usually when we give a program an input, we expect it to produce an output: we program the algorithm to specify exactly how to work with the data and what to do. In machine learning we do not do that. Instead of writing an algorithm, we give the computer examples, for instance of where the mouse is on the left or the right side of the screen, and after a while the machine will be able to predict by itself whether the mouse is on the left or the right side.
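As a rough illustration of that idea (a minimal sketch assuming p5.js plus an ml5.js-style neural network, not the exact code from class), labelled mouse positions could be collected and then classified like this:

```javascript
// Minimal sketch, assuming p5.js + an ml5.js-style neural network.
// Instead of programming the rule "left if mouseX < width / 2",
// we only feed the model labelled examples and let it learn.

let model;
let trained = false;

function setup() {
  createCanvas(400, 400);
  // one input (the mouse x position), classification into 'left' / 'right'
  model = ml5.neuralNetwork({ inputs: 1, task: 'classification', debug: true });
}

function keyPressed() {
  // teach by example: hover the mouse somewhere and label it with a key
  if (key === 'l') model.addData([mouseX], ['left']);
  if (key === 'r') model.addData([mouseX], ['right']);
  if (key === 't') {
    model.normalizeData();
    model.train({ epochs: 30 }, () => { trained = true; });
  }
}

function mousePressed() {
  if (trained) {
    // after training, the model predicts the side from the examples it saw
    model.classify([mouseX], (error, results) => {
      if (!error) console.log(results[0].label, results[0].confidence);
    });
  }
}

function draw() {
  background(220);
}
```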

When we are working with gestures, we are working with a limited set of gestures. Most computers and even phones might already be trained, using machine learning, on different gestures. That turns out to be an advantage when we make gestures that are similar to each other but not exactly the same. For exploring new gestures and movements, it can be much easier to use machine learning than to write an algorithm for them.

For this module, we should brainstorm about what kind of new gestures we want to work with. What body parts are we going to use to create those gestures? How does it feel to attach the material to our bodies and work with those new gestures? Is the speed of the gesture important, and does it have a big influence?

This topic in general allows us to be as creative as possible, since it is full of opportunities that could be explored!

Setting up the framework

To begin working with the machine learning project, today we also set up the framework that will be used as a base. The script allows us to connect to a server that can be accessed from our phones, which can then be used to record motions that will later be used to ‘train’ the computer. Collecting data and recording multiple tries of the same motion will allow the computer to recognise a similar motion in the future.
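The details of the framework are specific to the course, but as a rough sketch of the recording idea (assuming a p5.js sketch running in the phone's browser and an ml5.js-style neural network; the gesture label and sample length below are made up, and the server step that sends data from the phone to the laptop is left out), it could look roughly like this:

```javascript
// Hypothetical sketch of the recording step, not the actual course framework.

let model;
let recording = false;
let sample = [];                // accelerometer readings for one try of a motion
const LABEL = 'wave';           // placeholder gesture name
const SAMPLE_FRAMES = 60;       // roughly one second of readings at 60 fps

function setup() {
  createCanvas(windowWidth, windowHeight);
  // three values (x, y, z acceleration) per frame, flattened into one input vector
  model = ml5.neuralNetwork({ inputs: SAMPLE_FRAMES * 3, task: 'classification' });
}

function touchStarted() {
  // tap the phone screen to record one try of the motion
  recording = true;
  sample = [];
}

function draw() {
  background(recording ? 'tomato' : 220);
  if (recording) {
    sample.push(accelerationX, accelerationY, accelerationZ);
    if (sample.length >= SAMPLE_FRAMES * 3) {
      // one labelled try is finished; store it as a training example
      model.addData(sample, [LABEL]);
      recording = false;
    }
  }
}

function keyPressed() {
  if (key === 't') {
    // after recording many tries of each gesture, train the model
    model.normalizeData();
    model.train({ epochs: 50 }, () => console.log('training done'));
  }
}
```

Recording several tries per gesture, as described above, is what gives the model enough examples to recognise a similar motion later.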
