Visualizing Perception and Value

[Image: Screen Shot 2013-10-30 at 2.17.41 PM]

How to draw a building

What does it mean to “like” a building? How do perceptions and value judgments change our interaction with the built environment?

This project explores the relationship between perception and judgment by asking users to complete a simple task: click on their favorite part of the building, presented here as an image. The goal is to materialize these individual assessments in the form of aggregated drawings.

PROCESS:

The first phase of the project required creating a method of drawing based on user interaction. A trace of the building was created in Illustrator, and points were pulled from that trace and entered into the code, as sketched below.

[Image: spl_trace]
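A minimal sketch of how those traced points might be stored and drawn on a canvas follows. The coordinates and names below are placeholders for illustration; the real values came from the Illustrator trace.

```typescript
// Placeholder outline points; the actual coordinates came from the
// Illustrator trace of the building facade.
type Point = { x: number; y: number };

const tracePoints: Point[] = [
  { x: 40, y: 260 },
  { x: 120, y: 90 },
  { x: 310, y: 60 },
  { x: 480, y: 140 },
  { x: 560, y: 300 },
];

// Draw the trace as a simple polyline on a 2D canvas context.
function drawTrace(ctx: CanvasRenderingContext2D, points: Point[]): void {
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (const p of points.slice(1)) {
    ctx.lineTo(p.x, p.y);
  }
  ctx.stroke();
}
```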

Next, a routine was added that interprets the distance from a mouse click to the closest traced point, generating a disturbance in the line.
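A sketch of that click-to-disturbance step, reusing the Point type and tracePoints array from the previous sketch. The displacement formula here is an illustrative guess, not the project's exact math:

```typescript
// Find the traced point nearest the click and nudge it toward the click.
// The falloff (closer clicks pull harder) is an assumption for illustration.
function disturbNearestPoint(points: Point[], click: Point, strength = 0.5): void {
  let nearest = points[0];
  let best = Infinity;
  for (const p of points) {
    const d = Math.hypot(p.x - click.x, p.y - click.y);
    if (d < best) {
      best = d;
      nearest = p;
    }
  }
  // Displacement shrinks with distance, so each click leaves a local mark.
  const pull = strength / (1 + best / 50);
  nearest.x += (click.x - nearest.x) * pull;
  nearest.y += (click.y - nearest.y) * pull;
}
```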

After achieving a working version of the code, it became necessary to separate the image from the generated drawing, masking the end product from the user. This required defining two individual canvases, one to hold the image and one to hold the new drawing, as in the sketch below. Eventually the canvas on the right will be projected, while the image on the left will be part of the user interface.

[Image: Screen Shot 2013-11-05 at 9.20.07 PM]
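A sketch of the two-canvas separation, with hypothetical element ids: clicks are captured on the image canvas but rendered only on the separate drawing canvas.

```typescript
// Two separate canvases: one shows the photograph (the user interface),
// one accumulates the generated drawing, masked from the participant.
const imageCanvas = document.getElementById('image') as HTMLCanvasElement;
const drawingCanvas = document.getElementById('drawing') as HTMLCanvasElement;
const drawingCtx = drawingCanvas.getContext('2d')!;

imageCanvas.addEventListener('click', (e: MouseEvent) => {
  const rect = imageCanvas.getBoundingClientRect();
  const click = { x: e.clientX - rect.left, y: e.clientY - rect.top };
  disturbNearestPoint(tracePoints, click);
  // Redraw the disturbed trace on the drawing canvas only.
  drawingCtx.clearRect(0, 0, drawingCanvas.width, drawingCanvas.height);
  drawTrace(drawingCtx, tracePoints);
});
```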

The next step will be to create a means for storing and distributing the generated drawings; one possible approach is sketched below. Currently, the drawings are lost when the window is refreshed or closed. Preview the work in progress by clicking on “index(finalB)” below (click anywhere on the image to generate lines; press Enter to advance to the next image):

index(finalB)

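One possible route to persistence, sketched with localStorage and canvas.toDataURL; this is an assumption about how the drawings could be stored, not the method the project ultimately settled on. The key name is a placeholder.

```typescript
// Serialize the drawing canvas to a PNG data URL so it survives a refresh.
function saveDrawing(canvas: HTMLCanvasElement, imageIndex: number): void {
  localStorage.setItem(`drawing-${imageIndex}`, canvas.toDataURL('image/png'));
}

// Restore a previously saved drawing back onto a canvas context.
function restoreDrawing(ctx: CanvasRenderingContext2D, imageIndex: number): void {
  const dataUrl = localStorage.getItem(`drawing-${imageIndex}`);
  if (!dataUrl) return;
  const img = new Image();
  img.onload = () => ctx.drawImage(img, 0, 0);
  img.src = dataUrl;
}
```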

UPDATE: 11/12

[Image: Screen Shot 2013-11-12 at 9.01.53 PM]

This phase included adding more canvases. With the final presentation in mind, drawings have been separated so that each can be “captured.” This is visible in the image above: the line drawing on the top right is an aggregation of all drawings, the drawing on the bottom left is the line drawing from the first image, and the drawing on the bottom right is from the second image. Dots have also been added to help the user identify the areas clicked, both in the original image and in the final line drawings (see the sketch below). For the next step, the point inputs from the remaining source drawings will need to be completed, and more functions of the line will be explored.
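A sketch of the dot markers, stamped at the click position on every relevant canvas so the clicked areas stay identifiable; the list of contexts passed in (image, per-image drawing, aggregate) is a placeholder.

```typescript
// Stamp a small dot at the click position on each canvas passed in,
// e.g. the source image, the per-image drawing, and the aggregate drawing.
function markClick(contexts: CanvasRenderingContext2D[], click: Point, radius = 4): void {
  for (const ctx of contexts) {
    ctx.beginPath();
    ctx.arc(click.x, click.y, radius, 0, Math.PI * 2);
    ctx.fill();
  }
}
```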

UPDATE: 11/19

The project is beginning to enter the “fine tuning” stage from a programming standpoint. As for the main process, the code is (mostly) functioning at the desired level. A small but important bit of code was added this week: instead of pressing Enter to proceed to the next image, the program now advances automatically after a predetermined number of mouse clicks, currently set at 10. This number can be adjusted, but the end result will now be drawings populated with more information, rather than drawings with a single line or dot.
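A sketch of that auto-advance logic; nextImage() is a hypothetical helper standing in for whatever the project uses to load the next photograph and reset the trace.

```typescript
const CLICKS_PER_IMAGE = 10; // adjustable threshold, per the text above
let clickCount = 0;

// Hypothetical stand-in: loads the next building image and resets state.
declare function nextImage(): void;

function handleClick(click: Point): void {
  disturbNearestPoint(tracePoints, click);
  clickCount++;
  // After the tenth click, advance automatically instead of waiting for Enter.
  if (clickCount >= CLICKS_PER_IMAGE) {
    clickCount = 0;
    nextImage();
  }
}
```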

The design is currently shifting to user navigation and interaction. The original concept revolved around an interactive program in which users participated actively in the setting of an installation, generating drawings in real time. I have decided, however, to push the initial questions of perception and judgment back into the human realm.

Over the next week I will begin to gather participants and record their computer drawings via this project. After generating the computer drawings, the participants will be asked to interpret and draw, by hand, what the computer has generated (through their choosing of areas that interest them in each image). This provides an opportunity for the user to question and express their original, now morphed, perception of the building images, perhaps discovering something about their first impression of each image.

FINAL:

[Image: IMG_4772]

The final presentation included the original images, the programmed interpretations, and the hand-drawn interpretations. Six individuals, mostly fellow architecture grad students, were chosen to use the program, with widely varying results.

[Images: IMG_4778, spl, sanfranfed, bpl, ica, slc]