Living in an Algorithmic Error: A Disabled Cyborg Perspective on AI
Laura Forlano
Northeastern University, Boston, USA
This article offers a disabled perspective on the ways in which AI systems (and their failures) are experienced, drawing on an account of living with a “smart” medical device for over ten years. By bringing together autoethnographic vignettes with transcribed data from machines and examples from crip making practice, the article illustrates the ways in which the body is rendered into a testbed that is at once hopeful and harmful. The article proposes that the field of design adopt a “critical data studies” perspective to better understand the social consequences of technology.
Keywords – Disability, AI Systems, Failure.
Relevance to Design Practice – This article is relevant to design practice because it grounds experiences with AI systems in the context of everyday life with machines in order to expose the social consequences of design interventions.
Citation: Forlano, L. (2026). Living in an algorithmic error: A disabled cyborg perspective on AI. International Journal of Design, 20(1), 77-82. https://doi.org/10.57698/v20i1.07
Received March 27, 2026; Accepted April 20, 2026; Published April 30, 2026.
Copyright: © 2026 Forlano. Copyright for this article is retained by the author, with first publication rights granted to the International Journal of Design. All journal content is open-access and allowed to be shared and adapted in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) License.
*Corresponding Author: l.forlano@northeastern.edu
Laura Forlano is a Fulbright award-winning and National Science Foundation funded scholar, a disabled writer, social scientist, and design researcher. She is a professor in the departments of Art + Design and Communication Studies in the College of Arts, Media, and Design and Senior Fellow at The Burnes Center for Social Change at Northeastern University. She is the author of Cyborg (with Danya Glabau, MIT Press, 2024) and an editor of three books: Bauhaus Futures (MIT Press, 2019), digitalSTS (Princeton University Press, 2019) and From Social Butterfly to Engaged Citizen (MIT Press, 2011). She received her Ph.D. in communications from Columbia University.
I would like to dedicate this essay to Alice Wong, an American disabled writer and activist who passed away in November 2025. Here is a quote from Wong that was shared online by her family: “I’m honored to be your ancestor and believe disabled oracles like us will light the way to the future.” Though I never met Alice Wong, I will miss hearing her voice on podcasts and panels on storytelling, care, and disability justice.
Part 1: Adventures of a Disabled Cyborg: A Data Performance
My name is Laura Forlano. I’m a type 1 diabetic. Type 1 diabetes is an autoimmune condition that destroys the insulin-producing cells of the pancreas and, with them, the body’s ability to make insulin. You’re about to read notifications from a medical device—a “smart” insulin pump and sensor system—that I keep with me at all times.
It keeps me alive.
These are just samples of what I experienced during a typical 30-day period.1
July 1
4:15pm Auto Mode min delivery
4:15pm Auto Mode has been at minimum delivery for 2 point 5 hours. Enter BG to continue in Auto Mode.
8:20pm Auto Mode has been at max delivery for 4 hours. Enter BG to continue in Auto Mode.
July 7
6:06am Calibrate now
7:11am Calibrate now
8:16am Calibrate now
8:56am Sensor expired. Insert a new sensor.
9:34am Lost sensor signal. Move Pump closer to transmitter. May take 15 minutes to find signal. [super happy]
9:34am Sensor connected. If new sensor, select Start New. If not, select Reconnect.
9:39am Sensor warm-up started. Warm-up takes up to 2 hours. You will be notified when calibration is needed. [happy]
11:34am Calibrate now
11:58am BG required [hit the word “required” a bit hard]
3:49pm Alert on low
4:14pm Alert on low [repetition: hit it harder]
6:04pm Calibrate now [step forward or lean] (Each calibration is supposed to last up to six hours. I’ve always considered myself to be a very good sleeper but…This tested me.)
10:19pm Low reservoir. 20 units remaining. Change reservoir.
Part 2: A Disabled Cyborg Perspective on AI
I’m a disabled social scientist and design researcher who has been studying the social implications of emerging technologies such as AI for over 20 years. For the past 10 years, I’ve been writing about and making critical/creative work with my own experience “living (intimately) with” machines—namely, the insulin pump and sensor system that I use to manage type 1 diabetes (Forlano, 2023). I’ve experienced a number of what I consider to be “AI harms,” but, at the same time, I’m alive to share some of my creative work with you today.
Now, I’d like to explain some of what I’ve learned, how I approach thinking about and making with machines, and a few dreams I have for the future of the design field. When Johan Redström suggested the title “Left by our own devices,” I was immediately inspired. While his essay is situated historically and at the more macro level, mine is very, very micro…because…as you already know…when I think of “my own devices,” my devices are me.
I am a “disabled cyborg.” My devices are inseparable from my body, my subjectivity, and my politics. I use the term “disabled cyborg” as a constant reminder that it is not only me but also my machines that are disabled—full of flaws and failures, gaps and glitches, seams and symptoms, errors and omissions, bugs and biases (Forlano, 2017).
Disabled cyborgs can be both more-than-human (to underscore the relational understanding) and, at the same time, dehumanized. While the medical model of disability locates disability in the patient’s body, the social model locates it in the ways in which social structures—including socio-technical systems and what I call “intimate infrastructures”—exclude and oppress disabled people (Forlano, 2017). In using the label “disabled cyborg,” I extend disability scholar Alison Kafer’s (2013) work—to trouble the location of who or what is a problem or, in this case, an “error.”
My Data Performance traces the experience of living in what communications scholar Mike Ananny (2022) calls an “algorithmic error.” He writes that we need to “distinguish among systems, causes, harms, responsibilities, and remedies whenever data-driven, automated systems fail” (p. 3). He continues: “…algorithmic errors are almost everywhere and increasingly frequent, but they are usually hard to neatly categorize into discrete causes and harms” (p. 6).
In particular, he explains that we need to try to understand “how and why certain people see an algorithmic event as an error…while others see no error at all, just a system working as intended” (p. 9). Finally, he writes:
If algorithmic errors are seen as things that people cannot opt out of, that require collective action, and that create new shared consequences, then algorithmic errors become public problems. Turning algorithmic errors into public problems takes work. It means seeing seemingly private, individual errors in system design, datasets, models, thresholds, testing, and deployments—as well as the funding and imagination that birth such systems—as collective concerns. (Ananny, 2022, p. 21)
I argue that, over time, these errors accumulate in human experience, causing a kind of slow, exhausting, everyday “microviolence.”2
With the Data Performance, I aim to bring to life the ways in which the dominant assumptions evident in the design of these medical devices are often incompatible with what it means to live a dignified life and flourish as a disabled person. Disabled people are often deprived of the agency to make decisions about our own lives. Perhaps this should not be surprising, because disabled people—like other historically oppressed and marginalized people—have been, and still are, often excluded from the very category of what it means to be human.
For example, in a recent announcement, the U.S. State Department declared that it could deny immigrant visas to people with certain medical conditions, including type 1 diabetes, an obvious example of eugenicist ableism.
In The Future is Disabled, disability activist Leah Lakshmi Piepzna-Samarasinha (2022) writes: “Disabled people aren’t supposed to be alive, take up space, exist…” She goes on to point out, “In so much utopian social justice-oriented science fiction, it’s unquestioned that in the good utopian future, disabled people don’t exist” (pp. 17-18). She continues: “…a major way ableism works is to erase us from ideas of the future,” and she asks us to consider another possibility:
What would a future look like where the vast majority of people were disabled, neurodivergent, Deaf, Mad? What would a world radically shaped by disabled knowledge, culture, love, and connection be like? Have we ever imagined this, not just as a cautionary tale or a scary story, but as a dream? (Piepzna-Samarasinha, 2022, p. 22)
The lived experience of disabled people offers an expansion of our understanding of human difference—not as problems to be solved or fixed but rather as opportunities for questioning our assumptions as designers and broadening our possibilities for creative expression.
Here are a few of the lessons that I’ve learned during my experience as a disabled person living (intimately) with machines for the past 10 years:
First, there is no such thing as “automated.” Sometimes I wasn’t sure whether the machine was taking care of me or whether I was taking care of the machine, a situation that highlights the relational nature of life with machines. Disabled people do not need to be fixed by their machines; rather, they are the fixers of their machines. These technologies need round-the-clock care, underscoring the significant power asymmetries between human and machine. In my case, of the 286 alerts and alarms that I received in July 2019, only 65 (about 22 percent) were related to an urgent medical issue that needed treatment, while 221 (about 77 percent) were commands nudging me to take care of the machine (its functioning, of course, being also an urgent medical issue). All AI systems require human labor of various kinds and amounts. So, who is doing the labor for the AI systems that you are using?
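To make this kind of tally concrete, here is a minimal sketch in Python, assuming a hypothetical list of transcribed alerts and a few illustrative keyword rules; the device’s real Alarm History export, and any clinically meaningful triage, would look different:

```python
# A minimal sketch (not the device's real export format): sorting a month of
# alarm-history entries into "urgent medical issue" vs. "machine care" buckets.
from collections import Counter

# Hypothetical transcribed alerts, echoing the Data Performance script above.
alerts = [
    "Calibrate now",
    "Alert on low",
    "Sensor expired. Insert a new sensor.",
    "Low reservoir. 20 units remaining. Change reservoir.",
    "Alert on low",
]

# Illustrative keyword rules only; real triage would require clinical judgment.
MEDICAL_KEYWORDS = ("alert on low", "alert on high")

def category(alert: str) -> str:
    text = alert.lower()
    return "medical" if any(k in text for k in MEDICAL_KEYWORDS) else "machine care"

counts = Counter(category(a) for a in alerts)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n}/{total} ({100 * n / total:.0f}%)")
```

Even this toy version makes the asymmetry visible: most of what the system says is about its own upkeep, not about the body it is supposed to care for.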
Second, as you probably sensed, my experience living with this particular “smart” system was fraught. These technologies claim to fix the body, but they might be doing so at the expense of degrading the mind. When you hate the machine, but the machine is you, you begin to hate yourself.
Designers of such systems are not merely designing a human-machine relationship; they are designing subjectivity itself, what Anne-Marie Willis (2006) refers to as ontological design. Yet, in the United States, the Food and Drug Administration has asked companies to define “what” a manufacturer intends an algorithm to become as it learns. This is an effort to reduce the regulatory burden on companies, but it also allows for software modifications that may have a very real material impact on people’s everyday lives.
Instead, or, perhaps, in addition, we might also ask “who” future users aim to become. As designers, we must think about humans and machines relationally, interdependently, and co-evolutionarily, understanding that both humans and technologies are dynamically changing over time (without succumbing to reductive transhuman fantasies about humans being uploaded into machines).
Third, historian Paul Edwards (1997), discussing the artificial worlds of science fiction films, writes: “These artificial worlds, for example space stations and cities, may pretend to a self-sufficient autonomy they cannot really possess. Though often darkened, they are rarely still. Technological artifacts within the space assist in projecting an underlying, electric tension: the flickering fluorescent light, the ringing telephone, the active computer screen, the flashing indicators on a CPU. Sleep is fretful and frequently disturbed” (p. 307). And in his book 24/7, critic and essayist Jonathan Crary (2013) writes: “The denial of sleep is the violent dispossession of self by external force, the calculated shattering of an individual” (p. 7). As we know from the work of anthropologist Matthew Wolf-Meyer (2012), our current normative sleep regimes are highly organized around industrial capitalism, and, as a result, sleep disorders are pathologized.
Over the four years that I used that particular system, between 2018 and 2022, I not only developed a genuine fear of sleeping, I also became convinced that when I did sleep, I was “sleeping like a sensor,” in shorter patterns that mimicked the system (Forlano, 2020). This illustrates the ways in which we humans both shape and are shaped by machines. As such, following anthropologist Nick Seaver (2017), it makes sense to view AI not merely as a tool but as culture itself. Even if you do not use it, it is already shaping you. Is “AI refusal” even possible given the extent to which it is already being used by companies keen to cram it into every digital product, platform, and interface?
Fourth, power asymmetries also matter with respect to who designs technology and who uses it. Seemingly small modifications have a significant impact on everyday life, which is constantly being reshaped around software updates. Echoing Orit Halpern et al.’s (2013) idea of “test-bed urbanism” (p. 275), type 1 diabetics experience the body as a testbed.
For example, one version of the “smart” system I’ve used took away certain options, which made it more difficult for me to eat pizza (a food that I really happen to like). This suggests the need for greater opportunities for participatory design methods in the development of AI. While participation is not a design fix for AI systems, it is essential for surfacing different questions about the role of design and technology in the future (Sloane et al., 2020). Following design justice principles and the disability justice principles articulated by the organization Sins Invalid, AI development needs to be led by those who are most impacted and/or most likely to be harmed (Costanza-Chock, 2020).
Fifth, social relationships matter a great deal. They are often running in the background holding everything together when systems fail, and we can use design to sustain and strengthen those relationships.
For example, once when the plastic battery cap on my insulin pump broke, rendering the device’s software useless, the solution was not a DIY fix (I tried super glue), nor waiting several days for the company to send a new one by FedEx. Instead, a diabetes trainer whom I had worked with three years earlier recognized my name, happened to have an extra cap, and delivered it to me personally in less than two hours.
Another time, when I chose the wrong health insurance and (almost) couldn’t afford to purchase my insulin for the remaining six months of the year, Type 1 Diabetes Twitter came to the rescue: within a week, an acquaintance whom I had never met left me six bottles of insulin at a hotel front desk—a powerful gesture of mutual aid and solidarity.3
In her new book, The Double Bind of Disability, Rebecca Monteleone (2025) describes the ways in which disabled people, and particularly those using medical technologies that are sold based on a “rhetoric of individual autonomy, independence, and consumer choice” (p. 5), are trapped in a “double bind,” one which requires personal accountability “for managing disability through its prevention, treatment, or cure” while “their experiential and embodied knowledge is questioned or dismissed.” Monteleone sees this as “evidence of a strengthening moral regime that demands both responsibility and compliance” (p. 7).
Within this context of self-management, fostering collectivity itself can be understood as a radical act that is working in opposition to individualistic and capitalist systems. That is why, in spring 2024, after many years of isolation, I started a group called T1D/sign for scholars, designers, and artists, which has now grown to over 30 members across five countries. We meet on a monthly basis on Zoom and just completed our first collaboration: a workshop at an upcoming conference.
Sixth, while manufacturers continually offer techno-optimistic promises, I’ve learned that waiting for “the next version” of the system is not a solution for me or for the planet. Philosopher and disability scholar Ashley Shew (2023) suggests that, instead, we might view these promises through a lens of “technoableism,” meaning that companies often make things that disabled people do not want or need. One meme that circulated in a type 1 diabetes group on Facebook this summer, after Mattel introduced a Barbie doll with type 1 diabetes, depicted Barbie holding a sign saying “Sensor Error.” It was very funny, in an ironic way.
The problems with various systems may look a bit different, but they persist, and, in fact, they will never disappear because failure is the norm, whether living with humans or with machines. Yet, for liability reasons, we spend a lot of time trying to figure out who to blame when things go wrong. And, in addition, I’m now throwing away an even bigger hunk of plastic every 10 days.
Lastly, binary logic might work for machines, but it certainly is not fit for humans. Medical technologies for managing type 1 diabetes promise freedom, knowledge, and transparency regarding one’s blood sugar, but what happens instead is that our bodies become “closed worlds” [to, once again, reference Edwards (1997)] in which data and information are all that matter and threats, risks, and attacks become (almost) self-imposed. Rather than certainty, type 1 diabetics live with constant uncertainty as we navigate abstractions and live in between these “proxies.”
In the Data Performance script that I presented at the beginning of this paper, I frequently refer to the “Alert on Low” alarm. According to the machine, a reading of 80 is “normal,” but a reading of 79 is too low. For me, on one side of this boundary, perhaps I am in a beautiful dream (though, in my case, it is most likely a recurring dream about losing things), while on the other side lie several hours of interrupted sleep and a day as a “T1D zombie.” But don’t just take my word for it. You can observe many such things in your own everyday life with machines if you know what to look for—that is, if you have developed what Anna Tsing (2015) calls the “arts of noticing.”
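The arbitrariness of that boundary is easy to state in code. Here is a minimal sketch, assuming (based only on the 80/79 example above, not on the vendor’s actual logic or threshold handling) that the device alarms whenever a reading falls below 80 mg/dL:

```python
# A toy model of the hard threshold described above; the constant and function
# names are illustrative assumptions, not the manufacturer's implementation.
LOW_THRESHOLD = 80  # mg/dL, inferred from the "80 is normal, 79 is low" example

def alarm_state(glucose_mg_dl: int) -> str:
    # Binary logic: a single point separates "normal" from an alarm.
    return "Alert on Low" if glucose_mg_dl < LOW_THRESHOLD else "No alert"

print(alarm_state(80))  # No alert      -> perhaps a beautiful dream
print(alarm_state(79))  # Alert on Low  -> hours of interrupted sleep
```

One point on a scale decides between sleep and a night of alarms; nothing in the body distinguishes 79 from 80 in the way this conditional does.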
So, where do we go from here in the design field? Should we think about algorithmic error as a problem to be solved? Is AI a polycrisis in itself? Is it part of a post-normal condition in which there is no “normal” to go back to? Or should we think about algorithmic error as a new question? I myself see algorithmic error as an opening for new design approaches and creative practices.
Disabled “Making and Doing”
Recently, there have been several major museum exhibitions about disabled artmaking practices, including at the Victoria & Albert Museum and the Museum of Contemporary Art San Diego.4 My own response to my experience living with machines has been to move away from rational arguments and explanations and towards more performative, visceral, somatic, embodied, and experiential modes, both as a way of telling stories about my own experience and as a way of understanding myself, my data, and my machines. The result is the 4.5-minute Data Performance, delivered in person as part of my IASDR keynote talk (Forlano & Hickman, 2024); a small sample of it appears at the beginning of this essay.
In addition to the Data Performance, textile artist Sasha de Koninck and I have been working on a Data Physicalization of the same dataset in the form of a blue and orange digitally knit blanket complete with the text “Calibrate Now” and “Alert on Low” (Offenhuber, 2020). The blanket tells the story of the ways in which it is impossible to truly separate human and machine labor. These creative projects have allowed me to think about what I call “data demons,” described in a recent article in Leonardo: ways of working with data and computation that disrupt and disorient, exaggerate and complicate, confuse and corrupt (Forlano & Barrio, 2024).
What if we take a “critical data studies” approach to understanding AI, as my co-author and I do in the Cyborg book (Forlano & Glabau, 2024)? Such an approach would emphasize the following:
DisAIbled
MAIde
SituAIted
ContextuAI
InfrastructurAI
LocAI
LAIbor
MateriAI
TemporAI
CommunAI
BiAIsed
In Closing
In this essay, I draw on autoethnographic vignettes, transcribed machine data, and critical/creative making practices as evidence of the social consequences of living with machines, in order to call for the integration of new approaches in the field of design, such as critical data studies. The field of design has an openness that allows for the integration of many concepts and approaches, some of which are often in tension with one another. Given the current debates around the role and importance of AI, design can benefit significantly from reflecting on the social consequences of the things that we have already designed. While to some the space of medical devices may seem quite niche, subject to different expectations, rules, and regulations, I believe that these devices, and the ways in which they are already embedded in the world and in people’s lives, are more accurately understood as wise “canaries in the coal mine”: harbingers of assumptions, values, priorities, and norms that have already been put into practice.
I began writing about my insulin pump and sensor system in 2015, about two years after I started using them. When these systems “landed on me,” so to speak, I knew that I had a responsibility as a researcher to document my experience living with them. My prior experience thinking about these questions allowed me to focus my attention and awareness on certain aspects of their functioning/malfunctioning, on the economics and politics around their design, and on the aesthetic and affective aspects of the experience. My specific point of view is conceptually shaped by the fact that I spent 20 years studying other kinds of socio-technical systems, with attention to their materiality, locality, embodiment, context, and situatedness, drawing on science and technology studies and, in particular, feminist technoscience. In 2018, the manufacturer introduced a “smart” system that could dynamically adjust insulin delivery, a development that raised a series of new questions and urgencies. My experience with this system formed the basis for this essay and for the critical/creative making practices that have translated my experience into performative and material forms.
In the future, the design field will continue to be multidisciplinary (or transdisciplinary), but it will also be even more multimodal (including sound, smell, and touch) and multicultural. The question facing the design field is not what AI will become, and especially not what corporations want AI to become; rather, it is what we as humans want to become with technology.
What might it mean to “crip AI” in the field of design—or to view every design project from the perspective of disabled people (Williams et al., 2021)? How can we both live with and live well with machines? How might we expand our understanding of humanity in order to offer alternative ways of imagining “AI otherwise,” ways of considering more deeply what it is for and for whom?
In closing, to return once again to my own experience with the intrusive alarms that kept me up at night for four years: Sensors and humans make very strange bedfellows. Someday, in the near future, I’ll crawl into bed and cozy up, wrapped in a deep blue blanket knit from my data, and drift off to sleep. If we cannot sleep, we cannot dream!
Acknowledgements
I was truly honored to present this keynote talk at the IASDR conference in Taipei, Taiwan, in December 2025. I would like to thank Eliza Van Cort for her collaboration and coaching on the “Data Performance,” without whom the performance would have been impossible. Thanks to all of the hosts and organizers and, in particular, Lin-Lin Chen. I would also like to thank the many colleagues, friends, and students that I have had the opportunity to work with over the past several years, which has allowed me to continue developing this work as part of invited talks, workshops, and panels. These include: the Critical Futures Lab (January 2023-present), Northeastern University, Boston, MA; the T1D/sign Collective (March 2024-present); Katrina Jungnickel, Goldsmiths University, London, UK, as part of the Politics of Patents Festival (POPFest); Kristina Lindström and Tina Askanius, Center for Imagining and Co-Creating Futures, Malmö University, Malmö, Sweden; Åsa Ståhl, Gothenburg University; Li Jönsson, Malmö University, Malmö, Sweden; Virginia Marano, “CripTech Creativity: Rethinking Access through Art and Technology,” Florence, Italy; Alexandra Toland and Yvon Bonenfant, “Octopus Methodologies” Workshop, Bauhaus Weimar PhD Spring School, Bauhaus, Germany; Marisa Cohn and Vasiliki Tsaknaki, ETHOS Lab, ITU Denmark, Copenhagen, Denmark; Anna Vallgårda and Jonas Fritsch, TRACE Project, ITU Denmark, Copenhagen, Denmark; Daniela Rosner, University of Washington, Seattle, Washington; Sara Colombo, Feminist Generative AI Lab, TU Delft, Delft, The Netherlands; Maria Luce Lupetti, Politecnico di Torino, Torino, Italy; Oscar Tomico, TU Eindhoven, Eindhoven, The Netherlands; Madeline Balaam, Airi Lampinen, and JooYoung Park, FemTech Summer School, KTH, Stockholm, Sweden; Catherine Wieczorek, Georgia Institute of Technology; Irem Tekogul and Sasha de Koninck, Northeastern University; Azra Sungu; Jennifer Shin, Institute of Design, Illinois Institute of Technology, Chicago, Illinois; Paula Martin Rivero, Northeastern University; Beatriz Vincenzi and Nadia Campo Woytuk, “Bring Your Own Biodata” workshop, DIS 2025, Madeira, Portugal; Mafalda Gamboa, Audrey Desjardins, Sarah Homewood, Claudia Núñez Pacheco, and Karey Helms, “Many Samples of One” workshop, DIS 2024, Copenhagen, Denmark; Ann Light, Sussex University, Sussex, UK; Anna Brynskov, ITU Denmark, Copenhagen, Denmark; Grace Turtle, TU Delft, Delft, The Netherlands; Romany Dear; Andreu Belsunces, University of Catalunya, Barcelona, Spain; Nerea Calvillo, Warwick University; Grisha Coleman, Massachusetts Institute of Technology, Cambridge, MA; Elisa Giaccardi, Politecnico di Milano, Milan, Italy; Danya Glabau, New York University, New York, NY; and, Martin Tironi, Pontificia Universidad Católica de Chile, Santiago, Chile.
Endnotes
- 1. The script in Part 1 was presented as a Data Performance in the conference presentation of this paper. In the script, I have added notations in brackets to communicate the visceral experience of certain words or sentences. These were added to indicate how I should read the script, i.e., in a synthetic voice reminiscent of an enthusiastic AI agent. In the script, bold means that I emphasize the text by enunciating and, at times, speaking more loudly. Italics means that I deliver the text in my own voice. These alerts and alarms were transcribed in July 2019 exactly as they appeared in the medical device’s Alarm History, and they were augmented with short autoethnographic field notes.
- 2. With sincere thanks to Anna Vallgårda for introducing me to this term as a way of thinking about harm.
- 3. In this essay, I use autoethnographic vignettes such as this one to situate my key points. These vignettes were taken as field notes between 2018 and 2025. They were sometimes written as short one-sentence e-mails to myself after being woken up in the middle of the night by the insulin pump and sensor system. I then elaborated on them the following day and/or the next time I sat down to write about my experience.
- 4. See https://www.vam.ac.uk/exhibitions/design-and-disability and https://mcasd.org/exhibitions/for-dear-life-art-medicine-and-disability. Accessed on February 23, 2026.
References
- Ananny, M. (2022). Seeing like an algorithmic error: What are algorithmic mistakes, why do they matter, how might they be public problems? Yale Journal of Law & Technology, 24, 342-364.
- Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.
- Crary, J. (2013). 24/7: Late capitalism and the ends of sleep. Verso.
- Edwards, P. N. (1997). The closed world: Computers and the politics of discourse in cold war America. MIT Press.
- Forlano, L. (2017). Data rituals in intimate infrastructures: Crip time and the disabled cyborg body as an epistemic site of feminist science. Catalyst: Feminism, Theory, Technoscience, 3(2), 1-28. https://doi.org/10.28968/cftt.v3i2.28843
- Forlano, L. (2020). The danger of intimate algorithms. Public Books.
- Forlano, L. (2023). Living intimately with machines: Can AI be disabled? Interactions, 30(1), 24-29. https://doi.org/10.1145/3572808
- Forlano, L., & Barrio, I. (2024). From data doubles to data demons: Reflections on a criptech collaboration. Leonardo, 57(2), 132-140. https://doi.org/10.1162/leon_a_02488
- Forlano, L., & Glabau, D. (2024). Cyborg. MIT Press.
- Forlano, L., & Hickman, L. (2024). Day 27: A data performance. In C. Herrmann, E. M. Hunchuck, & M. I. Ganesh (Eds.), The AI anarchies book (pp. 116-121). Akademie der Künste.
- Halpern, O., LeCavalier, J., Calvillo, N., & Pietsch, W. (2013). Test-bed urbanism. Public Culture, 25(2), 272-306. https://doi.org/10.1215/08992363-2020602
- Kafer, A. (2013). Feminist, queer, crip. Indiana University Press.
- Monteleone, R. (2025). The double bind of disability: How medical technology shapes bodily authority. University of Minnesota Press.
- Offenhuber, D. (2020). What we talk about when we talk about data physicality. IEEE Computer Graphics and Applications, 40(6), 25-37. https://doi.org/10.1109/MCG.2020.3024146
- Piepzna-Samarasinha, L. L. (2022). The future is disabled: Prophecies, love notes and mourning songs. Arsenal Pulp Press.
- Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717738104
- Shew, A. (2023). Against technoableism: Rethinking who needs improvement. W. W. Norton & Company.
- Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a design fix for machine learning. In Proceedings of the 2nd ACM conference on equity and access in algorithms, mechanisms, and optimization (Article 1). ACM. https://doi.org/10.1145/3551624.3555285
- Tsing, A. L. (2015). The mushroom at the end of the world: On the possibility of life in capitalist ruins. Princeton University Press.
- Williams, R. M., Ringland, K., Gibson, A., Mandala, M., Maibaum, A., & Guerreiro, T. (2021). Articulations toward a crip HCI. Interactions, 28(3), 28-37. https://doi.org/10.1145/3458453
- Willis, A.-M. (2006). Ontological designing. Design Philosophy Papers, 4(2), 69-92. https://doi.org/10.2752/144871306X13966268131514
- Wolf-Meyer, M. J. (2012). The slumbering masses: Sleep, medicine, and modern American life. University of Minnesota Press.