Technology looms large in the contemporary world, and powerful new tools that extract and generate text using large language models (LLMs), such as ChatGPT, have had a major impact across a broad spectrum of society. The products of these tools, and our interactions with them, reproduce our deep-rooted presumptions about the world and ourselves. Through a Buddhist lens, these phenomena can also reveal the tightly knit, relational nature of things. This essay will draw upon and extend a Yogācāra Buddhist–inspired analysis to reflect on LLMs’ place in the contemporary world.

An important feature of Yogācāra is its rich causal analysis of the entangled nature of inner and outer worlds. When this psycho-physical relationality is masked, or when constitutive elements are ignored or externalized, the result can be individual and systemic suffering. By attending to the complex of material and cultural conditions that contribute to technological production, we can acknowledge a profoundly intertwined human-technology relation. We thereby become better positioned to align technological development and usage with human interests, and to identify, in more transparent and concrete terms, whose interests data production and consumption serve. When we can see the entangled relations at play and acknowledge their intricate causal histories, we might also confront more honestly the assumptions embedded in our languages and psyches, including false projections onto ourselves and our worlds. In Buddhist parlance, this is to acknowledge the role of karma in world-making.

Medieval Yogācāra texts can speak to the modern challenge of understanding and relating to AI because they articulate an essence-less, intertwined structure of the world that includes intentional conscious processes (that is, karma), as opposed to a world that is divinely ordered or driven purely by mindless matter. A world according to Yogācāra is neither simply pregiven nor strictly projected; it takes shape in an interactive, co-creative process. This world arises from interacting physical and mental causes and is guided by interests. In fact, the primary factor that determines what counts as real in Yogācāra epistemology is interest. Interests motivate goal-directed behavior and guide the construction and maintenance of a lifeworld.

More generally, interests are rooted in subjective dimensions of value, yet they determine what counts as data. Data is neither pregiven nor objective: interested subjects determine and shape what counts as relevant data out of an indeterminate array of possibilities. Significantly, after human interests have determined what counts as data, this preconfigured data is encoded into LLMs. Considering this through a Buddhist perspective on karma (that the worlds we inhabit are inextricably shaped and guided by interests and purposes) reminds us of this key, constitutive element in the production and retrieval of data, since material and mental worlds are deeply entangled and implicated in values. We should not forget that a key ingredient in realizing meaning in LLMs is the (human) interface. The cultural worlds of the (human) mind are extended and realized through LLMs, and these fundamental factors need to be acknowledged. Since LLMs are trained on language extracted from the internet, their outputs continue to reflect and re-present the artifacts of human meanings and values.
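This point can be made concrete in the idiom of machine-learning practice. What follows is a minimal, hypothetical sketch of a dataset-curation step; the function names, thresholds, and filters are invented for illustration, but each one encodes a human judgment about what counts as data before any model sees a token:

```python
# A minimal, hypothetical sketch of dataset curation. Every threshold and
# filter below encodes a human judgment about what counts as "good" data;
# the names and cutoffs are invented for illustration.

def looks_english(text: str) -> bool:
    # Crude stand-in for a language-identification step: a value-laden
    # choice that privileges one language community over others.
    return all(ord(ch) < 128 for ch in text)

def curate(corpus: list[str]) -> list[str]:
    """Select 'relevant' documents from an open-ended pool of candidates."""
    kept = []
    for doc in corpus:
        if len(doc) < 200:           # "too short to be useful": an interest
            continue
        if not looks_english(doc):   # "only English counts": an interest
            continue
        kept.append(doc)
    return kept

raw = [
    "a short note",
    "A long essay in English about technology and value. " * 10,
    "一段关于技术与价值的长篇中文文本。" * 20,
]
print(curate(raw))  # only documents matching the curator's interests survive
```

The point is not the particular code but its structure: before any training occurs, interested human choices have already carved “data” out of an indeterminate field of possibilities.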

With the meteoric rise of AI and powerful LLMs, minds might seem expendable and eliminable in the modern world, like the vanishing “God of the gaps.” Indeed, a functionalist interpretation of mind can be read in parallel with the historical record of failed theologies, in which appeals to God turned out to be placeholders for processes that were not yet understood. On this view, the mind, too, might be whittled away by each new technological development, with chatbots conceived as independent meaning-makers and CPUs positioned to replace the “ghost in the machine” as well.

While we can welcome false ideas of mind sharing this fate in the wake of technological progress, falling away along with false gods, it is a mistake to presume that the mind itself must be thrown out as well. Something significant is lost when the immanent, lived world is reduced to a material substrate and a nihilist ideology (an underworld of physics or some future techno-utopia). Yet nihilistic materialism is neither the only option nor the inevitable conclusion of a critique of false assumptions about the self, mind, and God. An alternative picture of the world, one that is irreducibly relational and oriented toward freedom, comes into view when we consider Buddhism.

The Yogācāra Buddhist notion of the “foundational consciousness” (ālayavijñāna) can be a resource for thinking about the place of generative AI tools like LLMs in the contemporary world. The foundational consciousness is a central doctrine formulated in Yogācāra texts to account for the continuity of the habitual patterns and causal processes that perpetuate a life of suffering. As a repository of cumulative patterns of habituated action that shape how we experience the world, it accounts for the development of the body, linguistic capacity, and self-identity. LLMs can be seen to play a role akin to the foundational consciousness in that they contain, reproduce, and reinforce patterns built upon traces of past actions. Moreover, echoing the reciprocal process of mind-world co-creation at work in the foundational consciousness, LLMs are shaped by human actors, while our interactions with their products, in turn, shape us. Acknowledging the entangled history and relational constitution of this technology can help us understand these tools, and ourselves, better.
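The structural analogy can be made vivid with a toy sketch. The following deliberately simplified bigram model is not a claim about how production LLMs actually work; it merely illustrates a system that stores traces of past linguistic activity and reproduces habituated continuations from them:

```python
# A toy bigram "language model": a deliberately simplified analogy (not a
# claim about how production LLMs work) for a repository that stores traces
# of past linguistic activity and reproduces habituated patterns from them.
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record which word tends to follow which: traces of past usage."""
    follows = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows: dict, seed: str, length: int = 10) -> str:
    """Re-present stored patterns: the output is shaped by the corpus."""
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # habituated continuation
    return " ".join(out)

corpus = "the mind shapes the world and the world shapes the mind"
model = train(corpus)
print(generate(model, "the"))  # echoes patterns laid down in training
```

In practice the loop runs in both directions: generated text re-enters the cultural world it was trained on, reinforcing the very patterns it carries.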

The intertwining of the material with the mental need not be understood as a collapse of differences into some kind of singularity or mystical union, because the intertwining can be revealed in and through dynamic causal relations without fusing the manifold into one. Maintaining a meaningful space for differences and multiple nodes of agency, including biochemical interactions and sociocultural processes, is a way to avoid reductionist and unidirectional explanations that tacitly imply a monistic, transcendent substance (like “matter” or “mind”) or some other singular essence or mechanism that ruptures the ecology of natural and cultural world(s). While distinguishable, neither the cultural and natural worlds nor the mental and the physical are ever actually or completely separable. This intertwined relation, I contend, is important to acknowledge as we think about our relation to AI and technologies like LLMs.

While the outputs of LLMs are certainly powerful, we should not presume that they are value-free; they do not present data innocently. Nor should we presume that the impersonal descriptions of science, even those of physics and chemistry, are or ever will be so wholly independent that human interests and personal values can be completely outsourced to them. Persons may be composed of impersonal things (like chemicals and molecules) or of the five aggregates (like material components and feelings), but it does not follow that persons are entirely reducible to them. As opposed to a reductionist account that collapses human values into impersonal descriptions, or vice versa, a complementary vision inspired by Yogācāra can frame a causal account of the natural world that underscores the irreducibility of the mind-world interaction, allowing deep layers of relationality to come into view.

Where naïve assumptions of a first-personal perspective prove unreliable, third-personal descriptions serve as a useful corrective, even though an independent third-personal account (the God’s-eye view) is nothing but a fiction. This is because the first person (or subjectivity) is never fully extractable from the equation in an actual, lived world. In this light, the first person and the third person are inextricably paired, all the way up and all the way down. By acknowledging this relational structure, and why human meanings and values cannot be completely reduced to or abstracted from nonhuman values, human concerns need not be sacrificed, deferred, or instrumentalized as merely a means to an end.

As we consider the place of the mind in a world adjusting to the growing presence of new technologies, Yogācāra can offer fresh insights and provide a framework for explicating this world as an intertwined network of the mental and the material. Yogācāra challenges us to reconceive the physical and mental conditions of the world by highlighting the constitutive place of mind in the natural world, as well as in the cultural history of “matter” and material technologies. While Yogācāra texts may offer only an outline of the contours of the mind-world relation in its current forms and evolving iterations, their robust account of relationality and clear articulation of the constitutive roles that interests and values play in the world are worth considering as we continue to fine-tune and critique our self-understanding and our relation to emerging technologies.