PLAN FOR EXHIBITION

Moholy-Nagy’s assertion that “the main requirement of an exhibition is activity, flow, effective visual demonstration, and easy communication” profoundly influenced my preparation for the exhibition. Months ago, I conceptualized a user interface that would react to users’ movements via a webcam, dynamically displaying information on the screen. The idea involved placing different objects on a table, which users could hold up to the camera to receive data-driven information in real time.

This concept directly shaped the design and organization of my exhibition, ensuring an interactive and engaging experience. By aligning with Moholy-Nagy’s principles, I aimed to create a display that was not only visually effective but also intuitive.

HAND TRACKING DATA VISUALIZER

I implemented the Mediapipe hand-tracking algorithm to enhance the interactivity of the DATA VISION process, making it more intuitive and user-friendly. The algorithm utilizes a machine learning model trained on diverse hand shapes and sizes to track hand movements frame by frame in real time. Each fingertip is represented as a distinct node, and I developed custom code to calculate the distance between the index and thumb nodes, dynamically determining object size. This interaction creates the effect of the user “zooming in” on a holographic object, adding a layer of simplicity and futurism to the data visualization experience.
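
A simplified sketch of this pinch-to-scale interaction is shown below, using the standard Mediapipe Hands solution and an OpenCV webcam feed; the scale mapping and thresholds are illustrative placeholders rather than the exact values used in DATA VISION.

import math
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # webcam feed
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            thumb = lm[mp_hands.HandLandmark.THUMB_TIP]
            index = lm[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            # normalised distance between the thumb and index fingertip nodes
            pinch = math.hypot(thumb.x - index.x, thumb.y - index.y)
            # map the pinch distance onto an object scale factor (placeholder mapping)
            scale = 0.5 + pinch * 4.0
            print(f"scale factor: {scale:.2f}")
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()

The scale value produced each frame is what drives the apparent size of the on-screen object, creating the “zooming in” effect described above.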

POINT CLOUD CONVERTER

I developed a script in Blender to convert images into .obj files, which are subsequently transformed into point cloud data using geometry nodes, a node-based process that separates the pixels of an image into individual 3D points. These points are exported to a dataset and then integrated into the DATA VISION visualizer. In this example, I use Chalayan’s SS2000 look to represent the intersection of fashion and furniture design through the lens of data-driven visualization.

Blender script
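
As a simplified illustration of the same idea, the sketch below maps each pixel of an image to a vertex in a .obj file directly through Blender’s Python API rather than through geometry nodes; the file paths, sampling step, and brightness-to-depth mapping are assumptions, not the exact setup used for the SS2000 look.

import bpy

IMAGE_PATH = "/path/to/chalayan_ss2000.png"  # placeholder path
OUTPUT_OBJ = "/path/to/point_cloud.obj"      # placeholder path
STEP = 4  # sample every 4th pixel to keep the point count manageable

img = bpy.data.images.load(IMAGE_PATH)
width, height = img.size
pixels = list(img.pixels)  # flat RGBA float list, row by row

with open(OUTPUT_OBJ, "w") as obj_file:
    for y in range(0, height, STEP):
        for x in range(0, width, STEP):
            i = (y * width + x) * 4
            r, g, b = pixels[i], pixels[i + 1], pixels[i + 2]
            brightness = (r + g + b) / 3.0
            # pixel position becomes X/Y, brightness becomes depth (Z)
            obj_file.write(f"v {x * 0.01} {y * 0.01} {brightness}\n")

Each vertex written this way behaves as a single point in the cloud once the .obj file is imported into the DATA VISION visualizer.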

GLOSSARY OF TERMS

Here’s a glossary of the terms mentioned in my Evaluative Report:

YOLO (You Only Look Once): A real-time object detection system that uses deep learning to identify and classify objects within images or videos. YOLO is known for its speed and accuracy in detecting multiple objects simultaneously.
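
For illustration, a detection pass like this can be run in a few lines of Python; the example below assumes the open-source ultralytics package and one of its pretrained models, which are not necessarily the exact tools used in DATA VISION.

from ultralytics import YOLO  # assumed package providing pretrained YOLO models

model = YOLO("yolov8n.pt")           # small model pretrained on the COCO dataset
results = model("street_scene.jpg")  # placeholder image path

for box in results[0].boxes:
    label = model.names[int(box.cls)]  # class name, e.g. "person" or "chair"
    confidence = float(box.conf)       # detection confidence between 0 and 1
    print(f"{label}: {confidence:.2f}")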

COCO (Common Objects in Context): A large-scale dataset used for object detection, segmentation, and image captioning. It includes over 320,000 images with 80 object categories, providing a benchmark for training machine learning algorithms in computer vision.

Mediapipe: An open-source framework developed by Google for building multimodal applications. It provides solutions for hand tracking, facial recognition, object detection, and more, enabling real-time processing of sensor and visual data.

Point Cloud: A collection of data points in 3D space that represent the external surface of an object or environment. It is often used in 3D scanning, modeling, and visualization to create digital representations of physical objects.

Difference Between Machine Learning and AI:

Artificial Intelligence (AI) is a broader concept that refers to machines designed to simulate human intelligence, capable of performing tasks like problem-solving and decision-making.

Machine Learning (ML) is a subset of AI focused on enabling systems to learn from data without explicit programming. ML uses algorithms to identify patterns and make predictions based on data.

Open Source: Software that is freely available for anyone to use, modify, and distribute. The source code is accessible to the public, encouraging collaboration, customization, and transparency in development.

Zettabyte: A unit of digital information storage equivalent to one trillion gigabytes (or 1,000 exabytes). It is used to measure massive amounts of data, such as global internet traffic or large-scale datasets.

UI (User Interface): The interface through which users interact with a computer system or application. UI design focuses on making digital tools intuitive and user-friendly, balancing aesthetics with functionality.

Inside-Out Architecture: A design concept associated with Richard Rogers that emphasizes exposing the internal structure and functionality of a building or object, making typically hidden elements visible to the user. It aims to create transparency and foster a deeper understanding of how a system or space operates.

SURVEILLANCE STATE PARANOIA

I conducted field research across London to evaluate the accessibility and prevalence of data collection instruments that reinforce the widespread perception of surveillance in the city. Within a single afternoon, I identified three distinct examples of “Orwellian” data collection practices, highlighting the pervasive and visible nature of surveillance technologies in the urban landscape.