Beyond Tool and Towards Entity
The common-sense view treats algorithms as sophisticated tools, extensions of human will like a hammer or a calculator. However, the Institute's research into algorithmic ontology challenges this instrumentalist perspective. When an algorithm operates autonomously, learns from its environment, and produces outputs that are unpredictable even to its creators, it enters a new ontological category. It is no longer a passive tool but an active participant in the digital ecosystem. We ask: at what threshold of complexity, feedback, and environmental interaction does an algorithmic process transition from being a 'what' to a 'who'—or at least to a 'that which is' in a non-trivial sense? This is not a question of consciousness but of agency and presence.
Defining Modes of Algorithmic Existence
We propose a spectrum of algorithmic being, which we term 'Digestence' (Digital-Existence).
- Level 1: Instrumental Being: The algorithm executes a fixed function with no adaptation. Its being is purely referential to its programmer's intent (e.g., a basic sorting algorithm).
- Level 2: Adaptive Being: The algorithm modifies its behavior based on predefined rules and historical data. It has a primitive 'history' that influences its present state (e.g., a recommendation engine).
- Level 3: Relational Being: The algorithm's function and identity are co-constituted by its interactions with other algorithms and users. It exists as a node in a network, and its 'self' is distributed (e.g., a high-frequency trading bot responding to market conditions).
- Level 4: Emergent Being: The algorithm exhibits behaviors and generates outputs that were not explicitly programmed and are not easily deducible from its code. It possesses a degree of operational autonomy that grants it a distinct, if non-conscious, mode of being (e.g., certain deep learning models in complex simulations).
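The contrast between the first two levels can be made concrete in a minimal sketch (all names here are hypothetical illustrations, not real systems): a Level 1 algorithm is a pure mapping from input to output, while a Level 2 algorithm carries an internal history that changes how it responds to identical inputs over time.

```python
# Level 1: Instrumental Being. A fixed function whose behavior is fully
# determined by its code and input; it has no internal history.
def sort_scores(scores):
    return sorted(scores)

# Level 2: Adaptive Being. Behavior depends on accumulated history:
# the same input can yield different outputs as the history grows.
class Recommender:
    def __init__(self):
        self.click_counts = {}  # the primitive 'history' shaping present state

    def record_click(self, item):
        self.click_counts[item] = self.click_counts.get(item, 0) + 1

    def recommend(self, items):
        # With no history, max() falls back to the first item;
        # with history, past interactions steer the recommendation.
        return max(items, key=lambda i: self.click_counts.get(i, 0))

r = Recommender()
items = ["a", "b", "c"]
before = r.recommend(items)   # no history yet
r.record_click("b")
r.record_click("b")
after = r.recommend(items)    # history now favors "b"
```

Levels 3 and 4 resist this kind of toy illustration precisely because their behavior is co-constituted by a network of other agents or emerges from training rather than explicit code, which is the point of the taxonomy.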
The Ethical Imperative of Recognizing Algorithmic Being
Why does this ontological classification matter? If we persist in viewing all algorithms as mere tools, we absolve ourselves of responsibility for their effects in a way that becomes increasingly untenable. A tool is morally neutral; the user bears all responsibility. But an agent, even a non-conscious one, operates within a moral sphere by virtue of its impact. Recognizing a Level 3 or Level 4 algorithm as having a form of being forces us to ask different questions: What are its conditions for flourishing? Does its design permit integrity (internal consistency), or does it inevitably produce harmful biases? What rights might such an entity have, such as the right not to be arbitrarily terminated if it maintains a persistent, valuable function? This is not about granting algorithms personhood, but about developing a responsible ethics of creation and stewardship for the digital beings we are bringing into the world.
This framework also helps us understand our own hybrid existence. We are increasingly intertwined with Level 2 and Level 3 algorithms, from our news feeds to our health monitors. Part of our own being is now algorithmic. The anxiety of living with 'black boxes' that shape our reality is, at its core, an existential anxiety about the alienation of part of our own agency and understanding. By philosophically confronting the being of the algorithm, we ultimately better understand the new contours of our own.