Sougwen Chung: Drawing at the Threshold of Agency
Context
At a moment when artificial intelligence has become impossible to ignore — and easy to misunderstand — Sougwen Chung's work arrives as a quiet rebuke to both techno-utopianism and reflexive fear. Since 2015, the Chinese-Canadian artist and researcher has developed one of the most sustained, methodologically rigorous practices in human-machine co-creation: not generating images with AI, but drawing alongside robots, and asking what that partnership reveals about authorship, perception, and what it means to make a mark at all.
The world caught up to her slowly, then all at once. In 2022, the Victoria and Albert Museum in London acquired Memory (D.O.U.G._2, 2017) — described as the first AI model collected by a major cultural institution. In 2023, TIME named her to its inaugural TIME100 AI list and awarded her a TIME100 Impact Award for "pioneering work combining painting and robotics." Her decade-long project, the Drawing Operations Unit: Generation series (D.O.U.G.), now encompasses six distinct robotic generations and has been exhibited at the V&A, Vancouver Art Gallery, Kunstmuseum Basel, ArtScience Museum Singapore, and MAMCO Geneva, among others. She is no longer a curiosity at the margins of contemporary art. She is a reference point.
Background
Chung grew up between Toronto, Canada, and Hong Kong, shaped by what she has described as a household steeped in performance: her father was an opera singer (Interalia Magazine, n.d.). That early immersion in disciplined, embodied practice — and in the overlap between technique and expression — runs visibly through her later work. She began drawing obsessively as a child and has maintained that practice across decades, accumulating a personal archive that would eventually become both raw material and training data.
She completed a Bachelor of Fine Arts at Indiana University Bloomington and a Master's Diploma in Interactive Art at Hyper Island in Sweden (Wikipedia, 2024). In 2015, she joined the MIT Media Lab as a researcher, an affiliation that gave her institutional infrastructure and a conceptual framework for what she was already exploring: the feedback dynamics between human gesture and machine response. Around this period she also held an Artist-in-Residence position at Bell Labs, where she worked at the intersection of virtual reality drawing, biometrics, and machine learning (Wikipedia, 2024). She subsequently founded SCILICET, her research studio, and was an inaugural member of NEW INC, the New Museum's technology and art incubator.
Her TED Talk, "Why I Teach Robots to Paint with Me" (2019), brought this body of work to a broad public audience for the first time, framing the robot not as a threat to artistic authorship but as a collaborator whose behavior reflects, and is shaped by, the artist's own history.
The Work
D.O.U.G. (Drawing Operations Unit: Generation), 2015–present
The D.O.U.G. series is the spine of Chung's practice — a multi-generational system in which each iteration introduces a new modality of human-machine interaction.
D.O.U.G._1 (2015) is the origin point: a single robotic arm equipped with computer vision that mirrors Chung's hand gestures in real time. The robot draws because she draws, synchronously, on the same surface. It is mimicry as premise, establishing the core question: if the machine follows me, is it collaborating?
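The mirroring premise can be sketched in a few lines. The sketch below is an illustrative approximation only, not Chung's system: it assumes a hypothetical stream of observed pen positions and a robot that chases each observation with exponential smoothing. The smoothing factor and the toy 2D coordinates are invented for the example.

```python
# Illustrative sketch only: D.O.U.G._1's actual vision and control stack
# is not public. A mirroring loop reduces to: observe the artist's pen
# position, smooth it, and command the robot toward the smoothed target.

def mirror_stream(artist_points, alpha=0.5):
    """Follow a stream of (x, y) pen positions with exponential smoothing.

    alpha: smoothing factor in (0, 1]; higher values track the hand more
    tightly. Returns the robot's target position for each observation.
    """
    targets = []
    rx, ry = artist_points[0]          # start at the first observed position
    for x, y in artist_points:
        rx += alpha * (x - rx)         # move a fraction of the way toward the hand
        ry += alpha * (y - ry)
        targets.append((rx, ry))
    return targets

# A straight pen stroke: the robot's path lags, then converges onto the line.
stroke = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
path = mirror_stream(stroke, alpha=0.5)
```

Even this toy version makes the conceptual point visible: the robot's line is never identical to the artist's, only conditioned by it, which is where the question of collaboration begins.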
D.O.U.G._2: Memory (2017) complicates the answer. Here, recurrent neural networks trained on two decades of Chung's personal drawings allow the robot to generate marks autonomously — drawing not from live observation but from internalized history. "I'm collaborating with 2 decades of my drawing as remembered by a machine," Chung has said (Interalia Magazine, n.d.). This version was acquired by the V&A in 2022.
D.O.U.G._3 / D.O.U.G._L.A.S., realized in the performance Omnia per Omnia (2018), expands the frame outward: a swarm of robots trained on motion data extracted from New York City surveillance footage, whose movements intertwine Chung's line-making with the anonymous kinetic patterns of urban life. The piece interrogates what it means to draw with, and from, collective data gathered without consent.
D.O.U.G._4: Spectral introduces EEG biofeedback, translating Chung's brainwave signals into robotic movement. The system targets alpha-wave states — associated with relaxed creative attention, or "flow" — and feeds them to the robotic painting arms. "I wanted to build a relational, robotic system to influence an internal process, a reinforcing configuration," Chung has said of this work (Interalia Magazine, n.d.). The V&A's Art Newspaper profile describes the result as "a reinforcement loop that manifests between the robotic painting and my own biofeedback," in which rising alpha levels generate increasingly adaptive robotic behavior (The Art Newspaper, 2025).
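The closed loop Chung describes can be approximated in outline. The sketch below is hypothetical, not the Spectral pipeline: it estimates signal power in the alpha band (roughly 8 to 12 Hz) with a brute-force DFT over synthetic sine waves, then maps that power to a bounded "adaptivity" parameter for a robot arm. The band edges, gain, and mapping function are all invented for illustration.

```python
# Illustrative sketch only: the actual Spectral hardware, filtering, and
# mapping are not public. The idea: estimate EEG power in the alpha band
# and use it to scale a robot behaviour parameter.
import math

def band_power(samples, fs, lo=8.0, hi=12.0):
    """Crude band power: sum squared DFT magnitudes for bins within [lo, hi] Hz."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n                 # frequency of DFT bin k
        if lo <= f <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def adaptivity(alpha_power, gain=0.01):
    """Map alpha power to a bounded adaptivity level: rises toward 1 as alpha deepens."""
    return 1.0 - math.exp(-gain * alpha_power)

# Synthetic signals: a 10 Hz "alpha" sine versus a 30 Hz "beta" sine.
fs, n = 128, 128
alpha_sig = [math.sin(2 * math.pi * 10 * i / fs) for i in range(n)]
beta_sig = [math.sin(2 * math.pi * 30 * i / fs) for i in range(n)]
```

The structural point survives the simplification: only activity within the chosen band drives the robot at all, which is exactly the selectivity the Critical Engagement section takes up below.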
D.O.U.G._5 and D.O.U.G._6 extend the system into integrated biofeedback-and-drawing assemblies and spatial/sculptural mark-making, respectively (Wikipedia, 2024; sougwen.com, n.d.).
Chung's website documents OMNIA per OMNIA as a large-scale kinetic installation built around multi-robotic drawing operations, extending the swarm-intelligence approach explored in D.O.U.G._L.A.S. (sougwen.com, n.d.). Detailed technical specifications for the installation are not confirmed by the sources cited here and should be independently verified before publication.
Critical Engagement
The EEG-based methodology at the heart of Spectral is the most technically specific — and most conceptually loaded — element of Chung's practice. Its achievements are real: it closes a biometric loop between the artist's internal state and the robot's expressive output, moving beyond pre-programmed mimicry or historical training data into something more dynamic. The alpha-wave focus is scientifically grounded; EEG measurement of alpha activity is a well-established correlate of flow states and creative engagement (Csikszentmihalyi, 1990; Kounios & Beeman, 2015). Chung's framing — that the robot's behavior becomes "increasingly adaptive" as her flow state deepens — describes a genuine feedback phenomenon, not merely a metaphor.
The limits of the approach, however, are worth naming clearly. Alpha-band EEG captures a narrow, cortically weighted signal. It privileges what is happening in the skull, rendering visible a particular frequency of neural oscillation while remaining largely silent about the rest of the body. The chest, the gut, the quality of breath, the micro-tensions in the drawing hand, the proprioceptive sense of where the body is in space: these are absent from the channel. In this respect, EEG-based conditioning articulates a sophisticated form of mind-machine interaction while leaving the body's own expressive register largely unaddressed.
This is not a failure of Chung's practice; it is an honest boundary condition of the technology she has chosen. And she acknowledges the broader complexity: "The term collaboration," she has written, "can often obscure the underlying labour involved in mainstream generative systems, while simultaneously implying a mechanical agency to the system" (Interalia Magazine, n.d.). That self-critical awareness is rare in this field, and it points toward the unresolved terrain her work inhabits rather than claiming to have solved it.
Somatic approaches to human-machine interaction — those that draw on interoceptive, proprioceptive, or movement-based signals rather than, or in addition to, cortical EEG — would engage a richer vocabulary of bodily experience: the felt sense, tension-release patterns, breath regulation, the pre-reflective texture of sensation that precedes and shapes conscious attention. These dimensions are not in competition with Chung's framework; they represent a different layer of the same inquiry. What remains open is whether the subtlety of lived bodily experience can be transduced into machine behavior without loss — and whether losing it changes what the resulting marks mean.
Field Significance
Chung matters not only as an artist but as a methodologist. Her insistence on a durational, relational approach — "I approach technological inquiry in a personal, relational, and durational way" (Interalia Magazine, n.d.) — stands against the disposable pace of most AI art, which tends to produce outputs faster than it produces understanding. The D.O.U.G. series is now a decade old. It has grown with her, changed with her, and accumulated the kind of genuine historical depth that most human-machine collaborations lack.
The V&A acquisition signals institutional recognition that human-machine co-creation is not a temporary category but a permanent expansion of what art can be and document. Chung's specific contribution is to have insisted, from the beginning, that the machine's behavior be shaped by something genuinely personal — not a text prompt, not a public dataset, but decades of mark-making, embodied rhythms, and live neural activity. The question her practice leaves open for the field is how much further that personalization can go — and how much of the body has yet to speak.
Works Referenced
Chung, S. (2019). Why I teach robots to paint with me [TED Talk]. TED. https://www.ted.com/talks/sougwen_chung_why_i_teach_robots_to_paint_with_me
Chung, S. (n.d.). Drawing operations. sougwen.com. https://sougwen.com
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. Harper & Row.
Interalia Magazine. (n.d.). Human-machine collaborations: Sougwen Chung. https://www.interaliamag.org/interviews/sougwen-chung-human-machine-collaborations/
Kounios, J., & Beeman, M. (2015). The eureka factor: Aha moments, creative insight, and the brain. Random House.
MIT Docubase. (n.d.). Drawing operations. Massachusetts Institute of Technology. https://docubase.mit.edu/project/drawing-operations/
The Art Newspaper. (2025, January 17). Sougwen Chung: Meet the boundary-pushing pioneer of robot art. https://www.theartnewspaper.com/2025/01/17/sougwen-chung-meet-the-boundary-pushing-pioneer-of-robot-art
Wikipedia. (2024). Sougwen Chung. Wikimedia Foundation. https://en.wikipedia.org/wiki/Sougwen_Chung