PLAN FOR EXHIBITION

Moholy-Nagy’s assertion that “the main requirement of an exhibition is activity, flow, effective visual demonstration, and easy communication” profoundly influenced my preparation for the exhibition. Months ago, I conceptualized a user interface that would react to users’ movements via a webcam, dynamically displaying information on the screen. The idea involved placing different objects on a table, which users could hold up to the camera to receive data-driven information in real time.

This concept directly shaped the design and organization of my exhibition, ensuring an interactive and engaging experience. By aligning with Moholy-Nagy’s principles, I aimed to create a display that was not only visually effective but also intuitive.

HAND TRACKING DATA VISUALIZER

I implemented the Mediapipe hand-tracking algorithm to enhance the interactivity of the DATA VISION process, making it more intuitive and user-friendly. The algorithm utilizes a machine learning model trained on diverse hand shapes and sizes to track hand movements frame by frame in real time. Each fingertip is represented as a distinct node, and I developed custom code to calculate the distance between the index and thumb nodes, dynamically determining object size. This interaction creates the effect of the user “zooming in” on a holographic object, adding a layer of simplicity and futurism to the data visualization experience.
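A minimal sketch of this pinch-to-zoom mapping is shown below. The function name and scale range are my own illustrative choices, not the actual DATA VISION source; in the live loop, `thumb_tip` and `index_tip` would be the thumb-tip and index-fingertip landmarks (indices 4 and 8) returned by MediaPipe's hand landmark model, each with normalised x/y coordinates.

```python
import math

def pinch_to_scale(thumb_tip, index_tip, min_scale=0.5, max_scale=3.0, max_pinch=0.4):
    """Map the thumb-index fingertip distance to an object display scale.

    thumb_tip and index_tip are (x, y) pairs in normalised [0, 1] image
    coordinates, as produced by MediaPipe hand landmarks 4 and 8.
    """
    dx = index_tip[0] - thumb_tip[0]
    dy = index_tip[1] - thumb_tip[1]
    distance = math.hypot(dx, dy)        # Euclidean distance in normalised space
    t = min(distance / max_pinch, 1.0)   # 0 = fingers touching, 1 = fully spread
    return min_scale + t * (max_scale - min_scale)
```

Clamping the pinch distance keeps the "hologram" from growing without bound when the hand opens fully, which is one simple way to make the zoom gesture feel stable.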

POINT CLOUD CONVERTER

I developed a script in Blender to convert images into .obj files, which are subsequently transformed into point cloud data using Blender’s geometry nodes; these separate the pixels of the image into 3D points. The points are exported to a dataset and then integrated into the DATA VISION visualizer. In this example, I use Chalayan’s SS2000 look to represent the intersection of fashion and furniture design through the lens of data-driven visualization.

Blender script
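The actual conversion runs inside Blender via geometry nodes, but the core idea of separating pixels into 3D points can be sketched in plain Python. This is an illustrative stand-in, not the Blender script itself; the function name and the brightness-to-depth mapping are my own assumptions.

```python
def image_to_point_cloud(pixels, depth=1.0, threshold=0):
    """Convert a 2D grid of grayscale values (0-255) into 3D points.

    Each pixel above the background threshold becomes one point:
    x and y come from its grid position, z from its brightness,
    mirroring what the geometry-node setup does inside Blender.
    """
    points = []
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            if value > threshold:  # skip empty/background pixels
                points.append((x, y, depth * value / 255.0))
    return points
```

In Blender itself, the equivalent step would sample the image texture inside a geometry-node tree and instance a point per sampled pixel, before exporting the result for the DATA VISION visualizer.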

GLOSSARY OF TERMS

Here’s a glossary of the terms mentioned in my Evaluative Report:

YOLO (You Only Look Once): A real-time object detection system that uses deep learning to identify and classify objects within images or videos. YOLO is known for its speed and accuracy in detecting multiple objects simultaneously.

COCO (Common Objects in Context): A large-scale dataset used for object detection, segmentation, and image captioning. It includes over 330,000 images with 80 object categories, providing a benchmark for training machine learning algorithms in computer vision.

Mediapipe: An open-source framework developed by Google for building multimodal applications. It provides solutions for hand tracking, facial recognition, object detection, and more, enabling real-time processing of sensor and visual data.

Point Cloud: A collection of data points in 3D space that represent the external surface of an object or environment. It is often used in 3D scanning, modeling, and visualization to create digital representations of physical objects.

Difference Between Machine Learning and AI:

Artificial Intelligence (AI) is a broader concept that refers to machines designed to simulate human intelligence, capable of performing tasks like problem-solving and decision-making.

Machine Learning (ML) is a subset of AI focused on enabling systems to learn from data without explicit programming. ML uses algorithms to identify patterns and make predictions based on data.

Open Source: Software that is freely available for anyone to use, modify, and distribute. The source code is accessible to the public, encouraging collaboration, customization, and transparency in development.

Zettabyte: A unit of digital information storage equivalent to one trillion gigabytes (or 1,000 exabytes). It is used to measure massive amounts of data, such as global internet traffic or large-scale datasets.

UI (User Interface): The interface through which users interact with a computer system or application. UI design focuses on making digital tools intuitive and user-friendly, balancing aesthetics with functionality.

Inside-Out Architecture: A design concept associated with Richard Rogers that emphasizes exposing the internal structure and functionality of a building or object, making typically hidden elements visible to the user. It aims to create transparency and foster a deeper understanding of how a system or space operates.

Surveillance State Paranoia

I conducted field research across London to evaluate the accessibility and prevalence of data collection instruments that reinforce the widespread perception of surveillance in the city. Within a single afternoon, I identified three distinct examples of “Orwellian” data collection practices, highlighting the pervasive and visible nature of surveillance technologies in the urban landscape.

DATAVISION User-Interface DEMO

A demonstration of the user interface designed for DATAVISION, fully coded in Python and running locally. This is round one of the GUI design; I will be adding more features as described in the project outline. Currently the software works by using the state-of-the-art YOLOv10 to identify objects. These objects are then given descriptors to humanise the process of machine-learning detection. Reference images and further tools that assist in pedagogy will be implemented before the end of the week!
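The descriptor step can be sketched as a simple lookup from raw class names to friendlier phrases. This is a hedged illustration, not the DATAVISION source: the table entries are invented, and in the live app the class names would come from a YOLOv10 model (for example via the `ultralytics` package's `YOLO("yolov10n.pt")` interface).

```python
# Hypothetical descriptor table mapping raw COCO class names to the kind of
# human-friendly phrases shown in the DATAVISION interface (entries invented
# for illustration only).
DESCRIPTORS = {
    "person": "a human figure in frame",
    "chair": "a piece of seating furniture",
    "handbag": "a fashion accessory",
}

def humanise(class_name):
    """Return a human-friendly descriptor for a detected class name."""
    return DESCRIPTORS.get(class_name, f"an unidentified {class_name}")
```

Keeping the descriptors in a plain dictionary makes them easy to extend as new reference images and pedagogical tools are added.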

THE CALL – DATA DRIVEN EXHIBITION

“If all media is training data, including art, let’s turn the production of training data into art instead.”

– Holly Herndon & Mat Dryhurst

Today I visited THE CALL exhibition at the Serpentine Gallery. The artists used training data in real time to match the pitch and tone of audience members singing into a microphone to real recordings of trained singers. Pitch and tone were autonomously synced using machine learning, creating a kind of neo-autotune. The graphics processing units (GPUs) responsible for the parallel processing were erected at the centre of the exhibition, their cooling fans intricately placed ahead of wind-pipe instruments to create a man-made, artificially intelligent music. Was the symphony which reverberated through the halls artificial or natural? Or was it a transhumanistic expression of the two? The simplicity brought me to tears. This exhibition gave me a firm reality check on the state of my project, which I believe might be too ambitious for the time I have left. I returned to a reflective state to pursue a more strategic and focused intention for DATAVISION.

PROTOPATTERN

DEMO_1

This application, developed for physical intervention workshops, is an interactive design tool that allows users to customize fashion patterns by adjusting their shape with sliders. It dynamically updates an SVG pattern based on user input, enabling algorithmic real-time visualization of design changes. The tool is ideal for pattern cutters and fashion designers looking to fine-tune garment shapes with precision and ease. It was created to decompartmentalize data and show the potential of data-driven design.

Death to Bureaucracy

Ikiru (“To Live”) is a 1952 film by Akira Kurosawa and serves as the canvas for this data-driven intervention. The film famously tackles the theme of bureaucracy and how it can cause one to waste their life away. This concept terrifies me; in the creative industry the same sentiment rings true. We must envision a breakdown of the barriers of bureaucratic behaviour and establish open forms of education and accessibility. DATA VISION attempts to establish pedagogy and make referencing accessible to amateurs and professionals alike. Data in this case has the ability to identify characteristics that strip away all redundancy, leaving a pure form of expression that is interpretable and, hopefully, recognisable. I see the themes Kurosawa evokes in his message reflected in the intention a computer has when learning to see for the first time. It transforms into a tool to fight against wasted time. The shaky instability of the detection algorithm mimics the way a child stumbles when learning to walk. In twenty years, a quantum computer will be able to detect every wrinkle that appears in this moving image.