Final Project – ‘Digitisation’ Iteration 3

‘Digitisation’
By Koh Zhi Kai, Janelle Lim, Norizainum bte Abdul Muin and Nur Amalina Ahmad

In the name of efficiency and connectivity, everything is shifting to digital platforms. It used to be typewriters, and now we have digital documents. We used to send greeting cards, but now we have e-cards. We used to call up our friends, but now we communicate through Facebook. The examples are endless, but they all drive towards digitisation, where everything shifts to digital forms that computer systems can process. Should we herald this fast-growing phenomenon? Or should we take a step back to consider how such a phenomenon, when fully developed and integrated into our lives, will affect the way we live, the way we interact with others and our society at large? What would it be like to live in a world where even information about when you last cut your nails is digitised? Will you be in control? Or would it be…

Iteration 3 – For final submission

For the final submission, our group added some final touches on top of those mentioned in the previous post (under Iteration 2).

Scanning lines (Panel 1 & 2)
We have added blue scanning lines to Panels 1 and 2 so as to further differentiate the programmer’s perspective from the user’s perspective. They also add to the futuristic mood.
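
If the scanning lines are meant to sweep down the panel rather than sit still, a minimal ActionScript 3 frame-script sketch could drive them; this is only an illustration, and the instance name scanLine_mc and the panel height of 600 px are assumptions rather than the actual project code:

    import flash.events.Event;

    // assumption: scanLine_mc is a thin blue line sitting at the top of the panel
    scanLine_mc.addEventListener(Event.ENTER_FRAME, moveScanLine);

    function moveScanLine(e:Event):void {
        scanLine_mc.y += 4;          // drift downwards every frame
        if (scanLine_mc.y > 600) {   // wrap around once it leaves the panel
            scanLine_mc.y = 0;
        }
    }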

Adjustment to Advertisement Panel (Panel 3)
We have also shifted the advertisement panel more towards the centre so as to obstruct viewers more and push them to want to shift it. Previously, the advertisement panel was in the bottom left-hand corner of Panel 3; users hardly touched it and hence did not realise that they could interact with it by moving it around.
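
For readers curious how the movable advertisement could be wired up, here is a minimal ActionScript 3 sketch using the built-in drag calls; the instance name ad_mc is hypothetical and this is not necessarily how our project implements it:

    import flash.events.MouseEvent;

    ad_mc.addEventListener(MouseEvent.MOUSE_DOWN, startAdDrag);
    ad_mc.addEventListener(MouseEvent.MOUSE_UP, stopAdDrag);

    function startAdDrag(e:MouseEvent):void {
        ad_mc.startDrag();   // the ad follows the mouse while the button is held
    }

    function stopAdDrag(e:MouseEvent):void {
        ad_mc.stopDrag();    // drop the ad wherever it was dragged to
    }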

Highlighting ‘Take Nori’s Homework’ (Panel 4)
Previously, we observed that users took a long time, or never got around, to click on ‘Take Nori’s Homework’, which we saw as the interaction with the most captivating feedback. As such, we highlighted the button so that it stands out from the rest, in the hope of increasing its visibility and calling out for it to be clicked.
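
One simple way to make a button stand out in ActionScript 3 is a glow filter. The sketch below is only an illustration of that idea, assuming a hypothetical button instance named takeHomework_btn:

    import flash.filters.GlowFilter;

    // colour, alpha, blurX, blurY, strength, quality – a bright blue halo around the button
    takeHomework_btn.filters = [ new GlowFilter(0x00CCFF, 0.9, 12, 12, 2, 3) ];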

Turn CCTV on and off (Panel 1)
In order to create an interaction between Panels 1 and 2, we allowed the programmer to switch the CCTV view screen (Panel 2) on and off through the code view screen (Panel 1).
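
A minimal ActionScript 3 sketch of such a toggle, assuming a hypothetical button cctvToggle_btn inside Panel 1 and a MovieClip panel2_mc holding the CCTV screen:

    import flash.events.MouseEvent;

    cctvToggle_btn.addEventListener(MouseEvent.CLICK, toggleCCTV);

    function toggleCCTV(e:MouseEvent):void {
        panel2_mc.visible = !panel2_mc.visible;   // flip the CCTV screen on or off
    }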

Credits (Panel 1)
In the name of fun, we added the credits in the most discreet place. When users click on the small ‘information icon’ in the top right-hand corner of Panel 1, the credits will appear.

In this credit screen, we used a stop motion video to make ourselves ‘spin’. This further supplements the futuristic mood of the whole project, and portrays the idea of being put under control.

 

 


Final Project – Implementing ‘Digitisation’

 

Iteration 1

In our first iteration, we roughly produced the four panels:

Panel 1: Programmer’s perspective – Code view
Panel 2: Programmer’s perspective – CCTV view
Panel 3: User’s perspective – User first person view with augmented reality
Panel 4: User’s perspective – User interface affording the selection of visible information

For our concept, we wanted to show how the system can control the thoughts and behaviour of the user. Therefore, we wanted the system to present several suggestions to the user. Upon selecting and agreeing to the suggestions, the user would act and think according to the system. This questions our autonomy of thought should we become so reliant on digital information. This assumption is not far from reality; as it is, people are already taking the things that they read online as facts. We also wanted to show how the system would usually display wise and positive suggestions. However, when the system gets hacked (interaction in Panel 1), these suggestions change to display nasty things.

Due to time constraints in producing our first iteration, we decided to shoot only one scene to test things out. In this scene, we have Zhi Kai and Nori studying. The scene is supposed to depict Zhi Kai getting stuck on his homework. As a result, the system would give Zhi Kai suggestions on how to overcome the roadblock. Such suggestions could be “Google a certain term” and so on. When the system gets hacked, it would instead instigate Zhi Kai to “Copy/Plagiarise Nori’s Homework”. For this iteration, we were only able to shoot the scene where the system suggests that the user take Nori’s homework. Upon agreeing, by clicking the button in Panel 4, the video footage plays in Panels 2 and 4, as if the user himself is following through with the action.
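
A rough ActionScript 3 sketch of that button-to-footage wiring is shown below; the instance names are assumptions for illustration, not the actual project code:

    import flash.events.MouseEvent;

    takeHomework_btn.addEventListener(MouseEvent.CLICK, playFootage);

    function playFootage(e:MouseEvent):void {
        footage2_mc.gotoAndPlay(1);   // same footage clip in the CCTV view (Panel 2)
        footage4_mc.gotoAndPlay(1);   // and in the user interface view (Panel 4)
    }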

Even though our group planned to shoot more video footage to display the choices of actions, as well as the message that we intended, we realised that time might not permit it because Zhi Kai would need to programme it from scratch. Also, following Jing’s suggestion, we decided to work with what we had.

During the critique session, we received feedback that the data displayed was too static and boring. It would be more captivating if we included more dynamic data, as well as more interactions on Panel 3 itself. It was also suggested that we include more intrusive data. In addition, the class had difficulty differentiating the two perspectives that we wanted to portray. In particular, they were unable to see Panel 2 as a CCTV view screen.

For this iteration, I focused on creating Panel 4, the user interface.

Iteration 2

Using the suggestions given by the class, we made the following improvements for our second iteration:

1. Differentiating the user’s perspective and the programmer’s perspective

  • Adding an overlay on CCTV screen  (Panel 2)
    An overlay is placed over the CCTV screen with words that simulate actual screens (e.g. ‘Tracking’, or the time and date). There are also additional blue lines to create a more futuristic effect.
  • Adding a blueprint on Panel 4
    We wanted to make Panel 3 and Panel 4 look more connected, as if the user sees both screens in their vision at one time. We initially thought of taking a photo of the actual scene to place as the background of Panel 4, so that Panels 3 and 4, when seen together, would look as though they are one person’s vision. However, we realised it would be difficult to get the perfect lighting and angle to match the video footage in Panel 3. Hence, to make do, we decided to add a blueprint that acts as a continuation of the vision in Panel 3. I used Adobe Illustrator to draw the blueprint.

In the midst of drawing the blueprint

Final Blueprint used at the backdrop of Panel 4

2. More intrusive data

  • To better highlight the issue of privacy, we have also included more intrusive data. Most prominently, we included the category ‘Health’ from Iteration 1.
  • We have also included advertisements that appear according to the type of information that the user chooses to see. The advertisements intrude into the user’s vision and we have programmed them such that they cannot be removed. This is to show how we can be slaves to technology, where we exchange our information, and perhaps our autonomy, for the convenience and benefits of technology; in this case, the use of the information system. It is somewhat like what information marketing giants Google and Facebook are doing: we agree to their terms and conditions in exchange for the use of their service. (See the sketch after this list.)
  • Here is an overview of all the data I have prepared for the project (they were written when I put on my nonsensical mind).
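
The category-to-advertisement behaviour could be wired up roughly as follows in ActionScript 3; every instance name here (health_btn, healthAd_mc, moodAd_mc) is hypothetical and the snippet is only a sketch of the idea:

    import flash.display.MovieClip;
    import flash.events.MouseEvent;

    // map each information category to its advertisement clip
    var adsByCategory:Object = { health: healthAd_mc, mood: moodAd_mc };

    health_btn.addEventListener(MouseEvent.CLICK, function(e:MouseEvent):void {
        showAd("health");
    });

    function showAd(category:String):void {
        var ad:MovieClip = adsByCategory[category];
        ad.visible = true;   // intrude on the user's vision
        // deliberately no close button: the ad can be dragged aside but never dismissed
    }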

3. More dynamic visuals and interactions

  • Instead of having the data plainly appear when users select a category on Panel 4, Nori, Amalina and Zhi Kai have prepared mind reader, mood detector and health detector icons that appear on Panel 3 when you mouse over the subject in Panel 3, and when you roll over the respective categories in Panel 4. (See the sketch after this list.)
  • The thoughts of the subject were also made to run, showing one at a time; this makes the whole piece more dynamic and interesting.
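
A rough ActionScript 3 sketch of the roll-over wiring, with hypothetical instance names (thoughts_btn for the category in Panel 4, mindReaderIcon_mc for the icon in Panel 3):

    import flash.events.MouseEvent;

    mindReaderIcon_mc.visible = false;   // icon hidden until the category is rolled over

    thoughts_btn.addEventListener(MouseEvent.ROLL_OVER, showIcon);
    thoughts_btn.addEventListener(MouseEvent.ROLL_OUT, hideIcon);

    function showIcon(e:MouseEvent):void { mindReaderIcon_mc.visible = true; }
    function hideIcon(e:MouseEvent):void { mindReaderIcon_mc.visible = false; }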

4. Miscellaneous
We have also added other small elements to refine the whole project and make the visuals more realistic. This includes:

  • Neutron Spinning icon in Panel 1
  • Watermark in Panel 1
  • Category Icon in Panel 3

For this critique session, we had guests from the CNM department giving us feedback. In particular, Prof Anne Marie interacted with the piece and commented on the intrusiveness of the data: ‘Isn’t it scary?’ This rightly demonstrates the kind of reaction that our group seeks to achieve with our piece.

Chris mentioned that there was not much interaction between Panel 2 and the rest. Our group attempted to resolve this issue but realised that it is conceptually incorrect for any interaction on Panel 2 to affect the other panels. Fundamentally, Panel 2 is a CCTV screen which the programmer uses to observe the changes he has made in the coding screen. Users, whom the CCTV is ‘safeguarding’, obviously cannot change anything on the CCTV, unless they themselves move around and their motion is tracked by the CCTV. Hence, our group left the absence of interaction between Panel 2 and the rest so as to preserve the concept of our piece.

We have also observed that people interacting with our piece tend to click on the square boxes that appear on Panel 2, even though no clickable interactions were programmed for them. In our attempt to fix this problem, we came up with three solutions.

  • Stop motion video of each item
    We wanted a stop motion video of the item spinning and showing its data whenever users click on it. It would appear as if the item is floating and spinning in thin air. However, due to time constraints and inadequate equipment, we were unable to execute this.
  • Ascii art
    We wanted an ASCII art rendering of the selected item to appear in the code view screen (Panel 1) when the item is clicked on the CCTV view (Panel 2). After experimenting with this option, we realised that it would distract users from reading the intrusive data that is meaningful to our concept and art piece. I used an online ASCII art generator to produce the necessary images and altered them according to the needs and visual feel of our project.

Ascii Art of fire extinguisher

Ascii Art of an iPhone

  • Provide subtle feedback
    Eventually, our group resorted to selectively showing some information about the objects when users roll over them; upon clicking, additional information is revealed. This solves our problem as it gives users a form of feedback and increases user interactivity in the panel.
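
A minimal ActionScript 3 sketch of this two-step feedback (some information on roll-over, more on click); the instance names are made up for illustration:

    import flash.events.MouseEvent;

    shortInfo_txt.visible = false;
    fullInfo_txt.visible = false;

    extinguisher_mc.addEventListener(MouseEvent.ROLL_OVER, peekInfo);
    extinguisher_mc.addEventListener(MouseEvent.ROLL_OUT, hideInfo);
    extinguisher_mc.addEventListener(MouseEvent.CLICK, revealInfo);

    function peekInfo(e:MouseEvent):void   { shortInfo_txt.visible = true; }
    function hideInfo(e:MouseEvent):void   { shortInfo_txt.visible = false; }
    function revealInfo(e:MouseEvent):void { fullInfo_txt.visible = true; }   // extra detail on click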

Final Project – Conceptualising ‘Digitisation’

Brainstorming: Working real hard!

Still at the initial stages of development, our group explored the idea of ‘teleportation’. We thought it would be interesting to look at the privacy and security concerns that come with such a technology. How will we guard against trespassing? How would we prevent people from invading our private spaces – bathrooms? Would technology also fail us such that we do not get completely teleported to our destination? What would happen to us then? We thought the idea would make perfect use of the four mandatory panels, where teleportation would be illustrated by transporting an object/person from one panel to another, with each panel depicting a different scene.

We were excited about the idea, and thought we got it right on. However, after a night of contemplation and further research on the science of teleportation, we realised that it perhaps gears more towards science fiction than science itself. At the moment, progress has been made in quantum teleportation, but not human teleportation. There are many issues with human teleportation that are not addressed, or may never be addressed, such as the teleportation of our spirit/soul.

With these issues at hand, we decided to scrap the idea and think of something new. Due to limitations of time, we decided to work on the theme ‘Digitisation’. This idea is largely inspired by the video ‘Sight’. Moving away from the video, we wanted to focus on just the phenomenon of digitisation, where everything is moving digital. Today, we are doing almost all of our tasks online. We are submitting our assignments online, we are chatting with each other online, we are shopping online. And the list goes on. In order to make a critical design out of our project, we decided to push this phenomenon to the extreme. We set the project in a day where every single piece of information is digitised. Yes, we mean EVERY piece of information.

We seek to raise the following issues:

  1. Who will manage the large amount of data?
  2. Can people behind the administrative system change the digital information that users see?
  3. Will the language of programming be so common that anyone can access the system that manages the information?
  4. Given the all-knowing capabilities of digital data, can digital information be so integrated into our lives that it influences our thoughts and behaviour?
  5. Will privacy of information be preserved?
  6. How will the ease of accessing information about others affect social relationships? (i.e. our interaction with others)
  7. What are the possible uses of large amounts of personal data? Advertising? Insurance?
  8. Are the above situations ethical, and are people comfortable with them?
  9. Who owns the data?

Discussing the topic brought great excitement, and it didn’t take long for ideas to start flowing in. Inspired by ‘Sight’, we wanted to portray two perspectives: the user’s perspective and the system administrator’s perspective. The users are consumers of digital information while the system administrators are the people who manage the information. We want to show how the system administrators have power over the information and would not only manage the information that the users see, but can also affect the thoughts and behaviours of users through the manipulation of data.

We envision this concept being perfectly illustrated via an art installation space, where we will have two rooms, one for the user’s perspective and one for the system administrator’s perspective. In the user’s perspective room, visitors will wear Google Glass to view the digital information through augmented vision. With that, they are able to select and view information about their surroundings. In the administrator’s perspective room, they will be faced with two screens to simulate their control over the data and the people viewing it (users in the other room). One screen consists of the programmer’s code while the other shows a CCTV screen of the user whose visual data he/she is managing.

Yes, we have big dreams. For the purpose of this project, we have to scale our idea down to four panels and Flash. As a result, we split the four panels to portray:

Panel 1: Programmer’s perspective – Code view
Panel 2: Programmer’s perspective – CCTV view
Panel 3: User’s perspective – User first person view with augmented reality
Panel 4: User’s perspective – User interface affording the selection of visible information

We seek to have interactions within and between panels to portray the aforementioned critical issues.

What is ‘When Art meets Science?’

When our group, consisting of Zhi Kai, Amalina and Nori, came together to discuss the theme, we realised that there was no one common understanding of what the theme really means. Some interpreted it as instances when Science and Art collaborate to produce a product, as in the case of architecture and product engineering. Others saw it as using scientific theories or phenomena as an inspiration for art, and last but not least, there is also the interpretation of using Art to display Science.

After doing some research on my own, I realised that there is no single definition for the phrase, and no one definition is more right than another.

Two exhibitions have used this theme:

1. The Bright Beneath: The Luminous Art of Shih Chieh Huang

Artist Shih Chieh Huang creates his work using plastic bags, household objects, computer cooling fans, LED lights, and other assorted materials. This photo is of a 2011 installation at the Smithsonian Institution’s National Museum of Natural History.
CREDIT: David Price / Smithsonian Institution

In this installation piece, artist Shih Chieh Huang was inspired by the science of bioluminescence. His involvement and research in the biology of deep-ocean creatures, and his close working relationship with scientists, inspired him to create the above installation using lights, computer parts, and plastic tube appendages. For more information, visit this page.

2. Art Meets Science Quilt exhibit

Hosted by Studio Art Quilt Associates, this exhibit explores the intersection of what they suppose to be two seemingly different disciplines – Art and Science. The artworks are an expression of the artists’ inspiration from scientific theories or phenomena.

In an interview, art historian Martin Kemp mentioned that artists and scientists collaborate to bring about understanding for people. Artists step into the picture not just to exhibit their creative and artistic flair; there is also a space where they address social issues. Science, through art, may be made more accessible to people who cannot make sense of pure abstract data and scientific facts. This is a pretty interesting interview that is worth a read!

As biotech and other advanced technologies
move out of the laboratory into the marketplace
there is a need now, more than ever,
to explore the cultural, social and ethical implications
of emerging technologies.

– Marcos Cruz and Steve Pike

This is rightly the space where Art steps in. For the purpose of this project, we have approached the theme from the perspective of a critical designer, where we “place new technological developments within imaginary but believable everyday situations that would allow us to debate the implications of different technological futures before they happen.” (Cruz & Pike)

17 Nov

When Art meets Science | Ideation

In this early stage of development, my group did our individual research before coming together to discuss each idea in greater depth. In my Assignment 4, I brought out some inspirations from existing critical designs that also fit the theme ‘When Art meets Science’. On top of those mentioned in my previous ideation post for Assignment 4, I also brought up some interesting findings in the hope of sparking some inspiration among my groupmates.

1. Evolution of Man

This artist’s depiction of Man in the year 3000 is a derision of Man’s reliance on technology today. Longer arms, legs, and fingers, bigger eyes, and smaller brains! Experts predict that we’ll be taller, at 6–7 ft; our arms and fingers will get longer so that we no longer need to reach too far; and we’ll have smaller brains since most of our memory work and thinking is done by our computers. (retrieved from: http://www.country933.com/2012/10/08/we-are-evolving/)

2. Future of Foods
In the future, what would food be like? In the case of 100% synthetic food, what would be the ethical, social, cultural and economic issues attached to it? How will society change? How will the roles in a family change? On top of Matt Brown’s Future of Foods, I have also shared this 3D printer that aims to afford everyone the ability to ‘print’ fine cuisine.
 

(Retrieved from Dornob.com)

3. Increased online activities and questions of privacy
The video says it all….

4. Augmented reality

The video ‘Sight’ is a short futuristic film by Eran May-raz and Daniel Lazo. In the video, they explore a world of augmented reality where people constantly use different applications via augmented vision. As a result, our actions become gamified and the dynamics of social relationships are put into question. The film also questions the possible manipulation of data, which may even affect human behaviour, by those who develop the software applications or have power and authority over digital data. Some critical issues raised would be: Is there privacy in our data? As programmers, can we programme people’s behaviour? Will there be ‘no’ social life? (Empty house) Is everything in life nothing but a game? (Gamification of all our actions)

16 Nov

Synthetic Food Iteration 4 – An additional nudge

Even though the final submission is over, I thought that the assignment titled ‘Synthetic Foods’ could be given another nudge. This time, to add to the ambiguity of the interactions, sounds are included to further stimulate users to construct opinions and speculations about synthetic foods.
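
If the sound were triggered in plain ActionScript 3 (rather than through Flash Catalyst states), a minimal sketch might look like this; ChopSound is a hypothetical library sound exported for ActionScript, and egg_btn a hypothetical basket button:

    import flash.events.MouseEvent;
    import flash.media.Sound;

    var chop:Sound = new ChopSound();   // linkage class generated from the library sound

    egg_btn.addEventListener(MouseEvent.CLICK, playChop);

    function playChop(e:MouseEvent):void {
        chop.play();   // the source of the sound is left ambiguous on purpose
    }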

Just for fun!

You can interact with the flash file here. Go grab a coffee and come back to allow it time to load! 🙂

27 Oct

Assignment 4 | Synthetic Foods Final

Snapshot A: Empty Plate

Snapshot B: Egg

Snapshot C: Veges?!

Snapshot D: Pieces of Meat

INTERACTION ART | SYNTHETIC FOODS
By Janelle Lim

To interact with the piece, you can choose to download the file or view it in your browser.

____________________________________________________________________________

THE PROCESS

The Cleaner Look
I was not feeling very comfortable with the lighting and colour scheme of Iteration 1. Hence, I went in search of white tables with windows to give the piece a brighter and cleaner/whiter look.

Finding the Appropriate Food
I envisaged the food to be in cubes, but creating them in Photoshop would not make them look real. Fortunately, I chanced upon the Rubik’s cube sandwich.

Photo from Insanewiches.com

The next step was to crack my head over which available foods come in huge blocks, big enough for me to shape pieces at least 4 cm × 4 cm × 4 cm in size.

The winners are cheese, tofu and… FRUITELLA?!

Setting up

From chunks to pieces

The Setup

I could have easily taken shots of all the different types of food freehand; however, a tripod is critical in ensuring that the exact same spot is captured so as to allow smooth transitions during interactions. Unfortunately, I forgot to bring the tripod along and had to make do with everything I could find around me to support the camera. It was the best I could do.

Post-editing
Here’s how Fruitella managed to be a part of it.

Fruitella undergoes transformation

Creating colour overlay to represent the different coloured markers

Creating Interactions with Flash Catalyst – Problems and Challenges
1. Overcoming limited states

Flash Catalyst only allows an application to have 20 states, which is in fact extremely limiting, considering that I have 3 states (full piece, pieces, one small piece) for each category of food (egg, meat, veges). On top of that, each state was conceptualised to be able to change to any of six colours at any one time. That brings the count to (3 × 3) × 7 (6 filters and one original) + 1 (empty state) = 64 states!? That far exceeds the limit. In order to overcome this, I used Custom Components, which enable me to fit all the filters into one state. Hence, it leaves me with just 3 × 3 + 1 = 10 states.

2. Overcoming Huge File size

I may have overcome the restriction on the number of states that Flash Catalyst imposes, but I wasn’t able to run away from the huge file size! There were so many different states on top of the innumerable interactions. Something had to go. I decided to drop the states showing one small piece of food.

 

 

3. Problems with Transparency

Somehow, Flash Catalyst is unable to detect transparency. As a result, using several filters at one time brings a weird darkened box to the surface around the image. Accordingly, I can only use a maximum of 2 colour filters at any one time.

4. No ‘History’

I found myself abandoning my working project to start a new one many times, simply because there is no means for me to go back in time like Photoshop and Illustrator afford. Sometimes I only realised a mistake halfway through, and it would be more efficient for me to start from scratch than to spend the time figuring out how to undo it.

5. Rigid user interface

As expected, I would have a countless number of interactions given that I have so many states. For those who love numbers, that is (2 × 3 × 7 + 1)! = 43!, which is roughly 6.04 × 10⁵²!?!?! My point is that many states can lead into one through a single button. In the case where all but one state leads to State A as directed by a button, I need to manually key in the interactions one by one. There could actually be a faster way for users, such as having check boxes instead of a drop-down list that affords the selection of just one item in the “When in” option.

SPOILERS

People whom I have tested the application with had more fun after they figured out how the application works. So here are some vague hints to prevent you from getting frustrated and to encourage you to have some fun!

The food can be erased.
The food can be chopped.
The food can be served with artificial flavourings.

_____________________________________________________________________________

Synthetic Foods

The world is frantically searching for a sustainable source of food while scientists in labs research and experiment to ease worldwide concerns over food shortages. Should the day come when our food is solely delivered from the labs, what will it be like? How will it look? Perhaps it will all come in cubes for space efficiency. Will our food be appetising? How will it taste? Given that it is all synthetic and artificial, will not eggs fundamentally be fish as well? Following that thought, are fish like vegetables? Will they all taste the same? Can we distinguish one from the other? But one thing is more certain than the rest: we can do away with kitchens since there is no longer a need to cook! Moms can stop worrying about putting together perfect dinners. Now that would also save us some space in times when land is scarce and the world becomes overpopulated!

27 Oct

Synthetic Foods | Iteration 2

 

I tidied up the visuals of the interaction piece based on the critique received:

  • Removed all unnecessary items in the background
  • Since I can’t change or move the built-in notice board away, I chose to use only white/blue notices so that they blend nicely into the noticeboard
  • The chopping board is replaced with a plate to better communicate ‘a meal’
  • Only six markers are selected and used from the whole range of stationery I had in Iteration 1

I chose to use coloured markers not only because they are visually unifying in representing stationery, but also because they contribute to envisaging a time when food colouring comes in the form of markers. Who knows? It may be a new art of garnishing food.

 

27 Oct

Synthetic Foods | Critique

Snapshot of Iteration 1

Synthetic Foods

The world is frantically searching for a sustainable source of food while scientists in labs research and experiment to ease worldwide concerns over food shortages. Should the day come when our food is solely delivered from the labs, what will it be like? How will it look? Perhaps it will all come in cubes for space efficiency. Will our food be appetising? How will it taste? Given that it is all synthetic and artificial, will not eggs fundamentally be fish as well? Following that thought, are fish like vegetables? Will they all taste the same? Can we distinguish one from the other? But one thing is more certain than the rest: we can do away with kitchens since there is no longer a need to cook! Moms can stop worrying about putting together perfect dinners. Now that would also save us some space in times when land is scarce and the world becomes overpopulated!

______________________________________________________________

Conceptual Framework
Clicking on the baskets of food serves food (represented by the cube) on the plate. Cognitively, users would expect each category of food (fish, eggs, veges) to be consistently presented with one type of visual (shape, colour, size, etc.). Using poetic interaction, I’ve enabled every category of food to produce a food cube of any colour at random (e.g. veg can be green, purple or blue, and so can the fish and eggs), in an attempt to nudge users to question the food itself. Was it fish that I clicked? Or was it the veges? Why do they look the same?
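
Although the piece itself was built with Flash Catalyst states, the random-colour idea can be illustrated with a short ActionScript 3 sketch; cube_mc and fish_btn are hypothetical instance names and the colour values are placeholders:

    import flash.events.MouseEvent;
    import flash.geom.ColorTransform;

    var cubeColours:Array = [0x7CBF5A, 0x9B59B6, 0x3498DB, 0xF1C40F, 0xE67E22, 0xE74C3C];

    fish_btn.addEventListener(MouseEvent.CLICK, serveCube);

    function serveCube(e:MouseEvent):void {
        var tint:ColorTransform = new ColorTransform();
        tint.color = cubeColours[int(Math.random() * cubeColours.length)];
        cube_mc.transform.colorTransform = tint;   // same cube, random colour, whichever basket was clicked
        cube_mc.visible = true;
    }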

Apart from ambiguity, poetic interaction also involves defamiliarisation. In terms of HCI, I’ve ‘disabled’ the selected object (the veges basket in the picture above) when the mouse rolls over it. Ideally, users would take a step back to question whether the area is in fact ‘clickable’ or would trigger any feedback, thereby bringing to consciousness how we usually assume greyscale items to be ‘disabled’ on the web. In terms of concept, I have set the food and chopping board on a study table. This ‘disjointed scenario’ is a defamiliarisation of the activities associated with study tables or workspaces, as well as of the context for food preparation. With all foods turning synthetic, is there really a need for a kitchen? Perhaps the new-found convenience will find us eating wherever and whenever.
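
The greyed-out roll-over could likewise be sketched in ActionScript 3 with a colour matrix filter (again only an illustration with a hypothetical instance name, not the Flash Catalyst implementation):

    import flash.events.MouseEvent;
    import flash.filters.ColorMatrixFilter;

    // each output channel (R, G, B, A) is a weighted sum of the input channels – standard greyscale weights
    var grey:ColorMatrixFilter = new ColorMatrixFilter([
        0.3, 0.59, 0.11, 0, 0,
        0.3, 0.59, 0.11, 0, 0,
        0.3, 0.59, 0.11, 0, 0,
        0,   0,    0,    1, 0
    ]);

    veges_btn.addEventListener(MouseEvent.ROLL_OVER, greyOut);
    veges_btn.addEventListener(MouseEvent.ROLL_OUT, restore);

    function greyOut(e:MouseEvent):void { veges_btn.filters = [grey]; }
    function restore(e:MouseEvent):void { veges_btn.filters = []; }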

Limitations and Feedback

The main and most important feedback received was on the lack of ambiguity and defamiliarisation in the HCI kind of interaction; there is insufficient poetic interaction in that respect. Of secondary importance, but still extremely essential, is to improve the visual representation of the cube, as well as to tone down the myriad colours currently in the background of the piece.

27 Oct

Poetic Interaction | Ideation

Any concept can be made into a poetic interaction piece. Hence, I decided to confine my thoughts and delve into ‘Art meets Science’, the theme for our Final Project.

The Research Process
In search for inspiration

The first thing that comes to mind when I think of ‘Art meets Science’ is Critical Design. And where better to look first than the pioneers of critical design? I browsed through the projects by Dunne and Raby, the Quantified Self community, as well as the Centre of Gastronomy, in the hope that their great ideas would rub off on me. Sadly, none did. But one thing that really intrigued me was the future of food, which people at the Centre of Gastronomy love talking about. This brought me to recall a really interesting project by Matt Brown called ‘Food and the Future of it’. He has beautifully crafted what he envisions to be the possible future of food, with egg printers, food cartridges, etc.

Hence, I decided to explore the food of our future, specifically synthetic foods. In the day when overpopulation becomes a pressing issue, a million times more than it is today, what will be the food that can sustain the human population? With the current development of science affording the mapping of genetic sequences, the prevalence of genetically modified foods and the use of artificial flavourings, it is not difficult to imagine a world where our food comes from the lab. How will synthetic food taste? What will it look like? How will it change human society and its social forces? Are humans made to eat synthetic food? There must be reasons why the Europeans stand so strongly against GM foods.

The Research Process
In understanding Poetic Interaction

Poetic interaction is a term absolutely new to me, and so… I turned to Google for a better understanding. I came across a blog post quoting Jon Kolko on the framework of poetic interaction:

The ‘Poetic Interaction’ Framework

One of the most fascinating (albeit underdeveloped) parts of the book is Kolko’s “poetic” model of interaction. He writes, “An interaction occurs in the conceptual space between a person and an object. It is at once physical, cognitive, and social. A poetic interaction is one that resonates immediately but yet continues to inform later—it is one that causes reflection, and one that relies heavily on a state of emotional awareness. Additionally, a poetic interaction is one that is nearly always subtle, yet mindful” (104). Kolko claims that what amount to the ‘common requisites’ of poetic interaction are “honesty, mindfulness, and a vivid and refined attention to sensory detail” (105).

That’s a lot to chew on. This quote reminds me that a poetic interaction enables users to reflect upon interactions that they may have taken for granted. Not only that, it illuminates the fact that poetic interaction can occur in different spaces: physical, cognitive, social. This thought inspires me to consider the poetic interaction between a person and an art piece through:

1. Cognition

  • Concept: The idea, e.g. what synthetic food is, along with the various questions mentioned above. Of course, many of the other points below may fall into this category, all connected to portray and challenge the assumptions of a concept/interaction
  • Sound: Possible sounds that decontextualise objects or cause users to question the source and meaning behind sounds

2. Physical

  • Physical interaction as in user interaction design (e.g. clicking of the mouse, roll-over, etc.), or the very act of carrying out an action physically

3. Social

  • Context: What is the societal environment like?
  • Relationships: How does one person relate to another?

Of course I wouldn’t say I know enough to write a book on Poetic interaction. Some of the above may be theoretically valid while others are questionable. In any case, they are platforms to launch my thought process in creating Assignment 4.
