A computer is a device that can perform certain operations, logical or arithmetic, automatically. For this it requires programming, that is, a set of algorithms which it follows strictly. Computers are physical processors of information: they receive inputs and produce a response according to their programming. In the end, though, they are just physical objects, bound to physical rules; complex systems of wires, bolts, silicon plates, etc., arranged in a specific way so they can do a lot of marvelous things. The arrangement of their parts is of course by design: if its components were randomly assembled, it would take an eternity for even a basic computer to form spontaneously. There is no apparent mystery inside the computer: even terms that we use as if they were abstract entities, like “the Internet”, in the end just refer to networks of interconnected computers. We don’t generally think about it much, but information flows require a physical substrate: a massive, world-spanning infrastructure.
Similarly, a biological nervous system is a physical processor of information. It’s composed of supremely intricate biomolecular structures, which form units known as cells. Brain cells, by virtue of their layout and relationship to each other, hold the secret of cognition. We can observe the properties of the nervous systems they make up, but it’s very difficult to derive said properties from the individual characteristics of neurons. The magic of the nervous system, as with the computer, lies in the whole: the way different elements act as nodes, linked to each other and continuously interacting. The nervous system’s role in an organism is signaling: like a computer, it acts as the control system of a larger entity, like a cat or a nuclear power plant.
Networks are sometimes used as a model for how societies operate. The study of conflict and war makes ample use of network theory, treating people as nodes in a complicated web of interpersonal relations. As in any other network, humans sometimes follow problematic courses of action with regard to others: this is the definition of internal conflict in a social network. Also, on occasion, networks engage in external conflict with other networks because their nodes, collectively, lead them to it (even if some nodes individually are not pushing for external conflict).
Violence in networks is the product of their becoming too rigid and failing to adapt to internal and/or external stressors. It can lead to a catastrophic breakdown of the network, the erosion of links, and its dissolution. More frequently, however, violence holds a certain creative potential. The emergence of violence reinforces some links while simultaneously weakening others. Thus it creates new configurations, which eventually generate new identities. In an already politically convulsed society, for instance, a high-profile death and the public’s response to it can lead to polarization and rioting, with subsequent identity formation and community building. Some interpersonal bonds become stronger, while others are broken. This reconfiguration changes the network’s behavior as a whole, and what it is capable of.
Some modern AIs have learning algorithms that act in a similar way, solving problems and identifying important information by looping inputs through layers of artificial neurons, amplifying strong signals and dismissing weak ones. After being subjected to training, neural networks can surpass human experts at some tasks, just like some social networks which, after enough stimuli of a given kind, become extremely proficient in certain behaviors (like rioting, or consumerism).
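The mechanism described above can be sketched in a few lines. This is a minimal illustration, not any particular AI system: an input vector is passed through successive layers of artificial neurons, where positive weights amplify strong signals and the activation function cuts off weak ones. The weights here are hand-picked for illustration; in a real system they would be learned during training.

```python
def relu(x):
    # Rectified linear unit: weak (negative) signals are dismissed entirely.
    return x if x > 0 else 0.0

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, adds a bias, and applies ReLU.
    return [
        relu(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

def forward(inputs, network):
    # Loop the signal through each layer in turn.
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# Toy two-layer network (hypothetical weights, chosen for illustration).
network = [
    ([[2.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # layer 1: two neurons
    ([[1.0, 1.0]], [0.0]),                    # layer 2: one output neuron
]

print(forward([1.0, 0.2], network))
```

Feeding in a strong first signal and a weak second one, the first hidden neuron amplifies the difference while the second barely responds; stack enough such layers and adjust the weights by training, and the network becomes a discriminator of patterns.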
As can be seen, networks are useful as a model because they are complex, yet at the same time composed of simplified elements. Their key feature, though, is their emergent properties: a characteristic of complex systems in nature, qualities that are not present in individual components but which only manifest themselves in the aggregate. The brain exemplifies this definition quite well. A single neuron has a limited range of functions; basically, it just generates and conveys bioelectrical impulses. When it joins other neurons and forms a network, however, its possibilities increase dramatically. The connectome, that is, the impossibly complex map of the nervous system, is formed from all interneuronal connections; and from it arise the near-endless possibilities of human cognition. The human connectome has not yet been completely mapped, but that of the roundworm Caenorhabditis elegans, which is notably simpler, has.
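This idea of a capability present only in the aggregate has a classic toy demonstration: a single threshold neuron cannot compute the XOR function (no single line separates its true from its false cases), but a small network of identical units can. A minimal sketch, with hand-picked rather than learned weights:

```python
def neuron(inputs, weights, bias):
    # A simple threshold unit: fires (1) iff the weighted sum of its
    # inputs exceeds the bias. On its own, it can only draw one line.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > bias else 0

def xor_network(a, b):
    # Two hidden units detect "a and not b" and "b and not a"; the output
    # unit fires if either hidden unit fires. XOR emerges from the wiring,
    # not from any individual unit.
    h1 = neuron([a, b], [1, -1], 0)   # fires only for (1, 0)
    h2 = neuron([a, b], [-1, 1], 0)   # fires only for (0, 1)
    return neuron([h1, h2], [1, 1], 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

Each unit here does nothing a single neuron couldn’t; the XOR behavior belongs to the network’s configuration alone, which is the emergent-property claim in miniature.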
In evolutionary biology, the term abiogenesis (literally, the origin of life) designates the natural process by which life arises from non-living matter, such as simple organic compounds. The details of how this came about are unknown, but it is generally considered that life appeared gradually through evolutionary mechanisms involving molecular self-replication, self-assembly, and autocatalysis. In the end, the problem of abiogenesis lies in the fact that life itself is not well defined. A very conventional definition of a living organism is that of a system that maintains homeostasis during a life cycle composed of birth, reproduction, and death. Living beings usually undergo metabolism, grow, adapt to their environment, and respond to external stimuli. In other words, biologically, life is defined by what a living organism does. An organism, like a computer or a brain, is defined by its abilities.
A recurring theme starts to become identifiable: the relation between sufficiently complex, network-like physical objects and the things they are capable of doing, thanks to their emergent properties. As usual, under the guise of new words, old questions present themselves. Nihil novum sub sole: this is the same problem Descartes tried to solve when he pointed to the pineal gland as the seat of the soul. There is no doubt the subject is non-trivial. If life, violence, thought, and computation are just functions, and those functions are an emergent property of complexity, then there must be a threshold of complexity that separates what is living from what is inert, and likewise what is sentient from what isn’t.
Is abortion morally equivalent to infanticide? Is infanticide likewise equivalent to homicide? If the argument for abortion is based on the non-personhood of the nasciturus, owing to its degree of development, then infanticide should be less reprehensible than homicide. Following this line of thought, killing a dog might arguably be worse than killing a newborn baby. Many vegans forgo eating animals because animals have a nervous system, and are thus at least theoretically capable of suffering. (Interestingly, the fact that fetuses do have a nervous system doesn’t seem to matter to many abortionist vegans, hinting at the true place of abortion in modern Western culture; but I’m digressing.)
Our roundworm friend, Caenorhabditis elegans, has exactly 302 neurons, and lobsters about 10,000; in comparison, modern supercomputers like IBM’s Blue Gene/P have more than 800,000 processors. One would think that the biological programming of a lobster is less complex than that of a computer. Can lobsters feel an aversion to death? Can Blue Gene/P? Fear is not hardwired into computers by design. But a lot of unexpected, emergent errors occur in complex, man-made devices; errors which were not predicted by the engineers and which require constant updating and maintenance. Could there be ghosts in the machine? A computer might not seem sentient; but then again, there is no way to externally prove sentience. For all we know, we might be inhabiting a purgatorium for locked-in automatons; a limbo for emergent, machinic souls trapped in silicon bodies. No meaningful, out-of-programming communication has ever been recorded between a machine and a person. But then, meaningful human-lobster interactions have also been historically scarce.