
Iteration 1
In our first iteration, we produced rough versions of the four panels:
Panel 1: Programmer’s perspective – Code view
Panel 2: Programmer’s perspective – CCTV view
Panel 3: User’s perspective – User first person view with augmented reality
Panel 4: User’s perspective – User interface affording the selection of visible information
For our concept, we wanted to show how the system can control the thoughts and behaviour of the user. The system therefore offers the user several suggestions; upon selecting and agreeing to a suggestion, the user would act and think according to the system. This questions whether we retain the autonomy to think for ourselves when we become so reliant on digital information. The assumption is not far from reality: people are already taking what they read online as fact. We also wanted the system to usually display wise and positive suggestions. However, when the system gets hacked (the interaction in Panel 1), these suggestions would change to display nasty things instead.
Due to time constraints for our first iteration, we decided to shoot only one scene to test the idea out. In this scene, Zhi Kai and Nori are studying, and Zhi Kai is stuck on his homework. The system therefore gives Zhi Kai suggestions on how to overcome the roadblock, such as “Google a certain term”. When the system gets hacked, it instead instigates Zhi Kai to “Copy/Plagiarise Nori’s Homework”. For this iteration, we were only able to shoot the scene where the system suggests that the user take Nori’s homework. Upon agreeing, by clicking the button in Panel 4, the video footage plays in Panels 2 and 4, as if the user himself is carrying out the action. (A rough sketch of this flow follows below.)
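To make the agree-and-playback wiring concrete, here is a minimal sketch in TypeScript. It is an illustration only, not our actual code: the names (Suggestion, playFootage, onAgree) and the clip filename are hypothetical.

```typescript
// Sketch of the suggestion flow, assuming a web-style build.

interface Suggestion {
  normalText: string; // what the system shows normally
  hackedText: string; // what it shows once hacked via Panel 1
  footage: string;    // video clip played when the user agrees
}

let hacked = false; // flipped by the interaction in Panel 1

const suggestion: Suggestion = {
  normalText: "Google the term you are stuck on",
  hackedText: "Copy/Plagiarise Nori's Homework",
  footage: "take_homework.mp4", // hypothetical filename
};

// Called when the user clicks the agree button in Panel 4.
function onAgree(s: Suggestion): void {
  const label = hacked ? s.hackedText : s.normalText;
  console.log(`User agreed to: ${label}`);
  // Play the same clip in Panel 2 (CCTV) and Panel 4,
  // as if the user himself is carrying out the action.
  playFootage("panel2", s.footage);
  playFootage("panel4", s.footage);
}

function playFootage(panel: string, clip: string): void {
  console.log(`[${panel}] playing ${clip}`);
}

onAgree(suggestion); // normal suggestion
hacked = true;
onAgree(suggestion); // hacked suggestion
```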
Even though our group planned to shoot more video footage to display the choices of actions, as well as the message we intended, we realised that time would not permit it, since Zhi Kai would need to programme everything from scratch. Following Jing’s suggestion, we decided to work with what we had.
During the critique session, we received feedback that the data displayed was too static and boring. It would be more captivating if we included more dynamic data, as well as more interactions on Panel 3 itself. It was also suggested that we include more intrusive data. In addition, the class had difficulty differentiating the two perspectives we wanted to portray; in particular, they were unable to recognise Panel 2 as a CCTV view.
For this iteration, I focused on creating Panel 4, the user interface.
Iteration 2

Using the suggestions given by the class, we made the following improvements for our second iteration:
1. Differentiating the user’s and the programmer’s perspectives
- Adding an overlay on the CCTV screen (Panel 2)
An overlay is placed over the CCTV screen with text that simulates an actual surveillance feed (e.g. ‘Tracking’, plus the time and date). There are also additional blue lines to create a more futuristic effect.
- Adding a blueprint on Panel 4
We wanted to make Panels 3 and 4 look more connected, as if the user sees both screens in a single field of vision. We initially thought of taking a photo of the actual scene and placing it as the background of Panel 4, so that Panels 3 and 4, seen together, would look like one person’s vision. However, we realised it would be difficult to get the perfect lighting and angle to match the video footage in Panel 3. Hence, to make do, we decided to add a blueprint that acts as a continuation of the vision in Panel 3. I used Adobe Illustrator to draw the blueprint.

In the midst of drawing the blueprint

Final Blueprint used at the backdrop of Panel 4
2. More intrusive data
- To better highlight the issue of privacy, we included more intrusive data. Most prominently, we included the category ‘Health’ from Iteration 1.
- We also included advertisements that appear according to the type of information the user chooses to see. The advertisements intrude on the user’s vision, and we programmed them such that they cannot be removed (see the sketch after this section). This is to show how we can be slaves to technology, exchanging our information, and perhaps our autonomy, for its convenience and benefits; in this case, the use of the information system. It is somewhat like what information marketing giants Google and Facebook are doing: we agree to their terms and conditions in exchange for the use of their services.
- Here is an overview of all the data I prepared for the project (written with my nonsensical mind switched on):




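Here is a minimal sketch, in TypeScript for illustration only, of how the unremovable, category-driven advertisements behave. The category names, ad copy, and function names below are hypothetical stand-ins, not our actual data or code:

```typescript
// Sketch: ads accumulate based on the categories the user views,
// and dismissing them is deliberately impossible.

const adsByCategory: Record<string, string[]> = {
  Health: ["Slimming tea, 50% off!", "Book a health screening today"],
  Mood: ["Feeling down? Try HappyPills(TM)"],
};

const activeAds: string[] = [];

// Called whenever the user selects a category on Panel 4.
function onCategorySelected(category: string): void {
  for (const ad of adsByCategory[category] ?? []) {
    activeAds.push(ad); // ads pile up in the user's vision
  }
  render();
}

// Deliberately a no-op: the piece provides no way to dismiss ads,
// mirroring how we trade attention for the system's convenience.
function dismissAd(_index: number): void {
  // nothing happens
}

function render(): void {
  console.log("Ads intruding on Panel 3:", activeAds);
}

onCategorySelected("Health");
dismissAd(0); // has no effect
render();
```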
3. More dynamic visuals and interactions
- Instead of having the data plainly appear when users select a category on Panel 4, Nori, Amalina and Zhi Kai prepared mind reader, mood detector and health detector icons that appear on Panel 3 when you mouse over the subject there, or when you roll over the respective categories in Panel 4.
- The subject’s thoughts were also made to cycle past one at a time, which makes the whole piece more dynamic and interesting. (A rough sketch of both behaviours follows below.)
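A minimal TypeScript sketch of the rollover icons and the one-at-a-time thought cycle; the category names, thought text, and function names are hypothetical illustrations, not our actual code:

```typescript
// Sketch: rollover toggles detector icons; thoughts cycle one at a time.

type Category = "mindReader" | "moodDetector" | "healthDetector";

const visibleIcons = new Set<Category>();

// Fired when the mouse is over the subject in Panel 3,
// or over the matching category in Panel 4.
function onRollOver(category: Category): void {
  visibleIcons.add(category);
  console.log("Icons shown on Panel 3:", Array.from(visibleIcons));
}

function onRollOut(category: Category): void {
  visibleIcons.delete(category);
}

// The subject's thoughts show one at a time instead of all at once.
const thoughts = [
  "I'm so stuck on this question...",
  "Maybe I should take a break.",
  "What is Nori writing?",
];
let current = 0;

// In the actual piece this would run on a timer (e.g. setInterval);
// each tick displays the next thought.
function nextThought(): void {
  console.log("Thought:", thoughts[current]);
  current = (current + 1) % thoughts.length;
}

onRollOver("mindReader");
nextThought();
nextThought();
onRollOut("mindReader");
```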
4. Miscellaneous
We also added other small elements to refine the whole project and make the visuals more realistic. These include:
- Neutron Spinning icon in Panel 1
- Watermark in Panel 1
- Category Icon in Panel 3
For this critique session, we had guests from the CNM department giving us feedback. Notably, Prof Anne Marie interacted with the piece and commented on the intrusiveness of the data: ‘Isn’t it scary?’ This is precisely the kind of reaction our group sought to achieve with the piece.
Chris mentioned that there was not much interaction between Panel 2 and the rest. Our group attempted to resolve this issue, but realised that it would be conceptually incorrect for any interaction on Panel 2 to affect the other panels. Fundamentally, Panel 2 is a CCTV screen which the programmer uses to observe the changes he has made in the coding screen. The user, whom the CCTV is ‘safeguarding’, obviously cannot change anything on the CCTV, unless he himself moves around and his motion is tracked by the camera. Hence, our group deliberately left out interactions between Panel 2 and the rest, so as to preserve the concept of our piece.
We also observed that people interacting with our piece tended to click on the square boxes that appear on Panel 2, even though no clickable interactions were programmed for them. In our attempt to fix this problem, we came up with three candidate solutions.
- Stop motion video of each item
We wanted a stop motion video of each item spinning and showing its data whenever users click on it. It would appear as if the item were floating and spinning in thin air. However, due to time constraints and inadequate equipment, we were unable to execute this.
- ASCII art
We wanted ASCII art of the selected item to appear in the code view screen (Panel 1) when the item is clicked in the CCTV view (Panel 2). After experimenting with this option, we realised that it would distract users from reading the intrusive data that is meaningful to our concept and art piece. I used an online ASCII art generator to produce the necessary images and altered them to suit the needs and visual feel of our project. (A rough sketch of the mechanism we tried follows the images below.)

ASCII art of a fire extinguisher

ASCII art of an iPhone
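For the curious, a minimal TypeScript sketch of the mechanism we experimented with (and later dropped); the art strings and names are hypothetical stand-ins for the generated images:

```typescript
// Sketch: clicking an item in the CCTV view (Panel 2) prints its
// ASCII art into the code view (Panel 1).

const asciiArt: Record<string, string> = {
  fireExtinguisher: `
   __
  |  |
  |##|
  |__|`,
  iphone: `
  .----.
  |    |
  |    |
  '-()-'`,
};

// Fired when a square box in Panel 2 is clicked.
function onItemClicked(item: string): void {
  const art = asciiArt[item];
  if (art) {
    console.log("[Panel 1]", art); // overlays the code view
  }
}

onItemClicked("fireExtinguisher");
```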
- Provide subtle feedback
Eventually, our group settled on selectively showing some information about the objects when users roll over them; upon clicking, additional information is revealed. This solves our problem, as it gives users a form of feedback and increases interactivity in the panel. (A rough sketch follows below.)
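A minimal TypeScript sketch of this rollover-then-click feedback; the object names, data, and handler names are hypothetical illustrations, not our actual code:

```typescript
// Sketch: roll over an object for a teaser, click it for the full data.

interface ObjectInfo {
  teaser: string; // shown on rollover
  full: string;   // revealed on click
}

const objects: Record<string, ObjectInfo> = {
  fireExtinguisher: {
    teaser: "Fire extinguisher - last serviced 2013",
    full: "Model XJ-9, pressure low, owner fined twice for tampering",
  },
};

function onRollOver(name: string): void {
  console.log(objects[name]?.teaser ?? "");
}

function onClick(name: string): void {
  console.log(objects[name]?.full ?? "");
}

onRollOver("fireExtinguisher"); // subtle feedback
onClick("fireExtinguisher");    // additional information revealed
```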