Can AI Ever Be Truly Conscious?
Johannes Niederhauser and Sean McFadden | Voices with Vervaeke
But isn't American "culture" (in particular) a massive cargo cult, with its own bright, shiny, light-filled, oh-so-clean and sterile temples and cathedrals in the form of giant shopping malls - no riff-raff allowed!
It also has a 24/7 call-to-prayer (consume!) network in the form of TV, aimed squarely at the faithful consumer. Every exciting new bright, colourful, shining product is advertised as a must-have item, even as a source of happiness, especially the most recognized "iconic" brand-name products. Happiness is, of course, a clean, germ-free toilet bowl.
The products (cargo) are akin to communion wafers, or prasad.
So the faithful consumer trucks on down to the cathedral to get his/her cargo.
Of course, very few of the faithful consumers stop to think about where the cargo comes from - out of nowhere, as it seems - or about the blood, sweat, and tears involved in producing it, and the massive environmental destruction too.
What will happen if, somehow, in the scheme of things the cargo is no longer available?
Me:
What do anthropologists and philosophers of culture mean by the term "cargo cult"?
ChatGPT:
The term "cargo cult" refers to a type of belief system or religious movement that has emerged in certain parts of the world, particularly in Melanesia, in response to the arrival of Western goods and technology. These cults typically involve rituals, ceremonies, and beliefs that are based on a desire to obtain the material benefits that are associated with Western civilization.
Anthropologists and philosophers of culture use the term "cargo cult" to describe these movements because they are often characterized by a focus on obtaining "cargo," or Western goods such as tools, clothing, and food. The belief is that these goods will be obtained through ritualistic behavior and other means.
The term "cargo cult" originated during World War II, when American and Allied forces set up bases in Melanesia and brought with them a large amount of supplies and equipment. The indigenous people of the region were amazed by the abundance of goods and technology, and they developed religious movements centered on obtaining these items.
In the years since, cargo cults have continued to emerge in various parts of the world, often as a response to globalization and the spread of Western culture. The term "cargo cult" is now used more broadly to describe any cultural movement that is based on the belief that material benefits can be obtained through religious or ritualistic means.
Me:
When AI developers and their critics speak of AI as invoking behavior similar to that of a "cargo cult," what is the point they are trying to make?
ChatGPT:
When AI developers and their critics speak of AI as invoking behavior similar to that of a "cargo cult," they are usually making a point about the limitations and pitfalls of current AI systems, particularly in the area of machine learning.
The analogy to cargo cults suggests that current AI systems are often based on a superficial imitation of human behavior, without a deep understanding of the underlying principles that govern that behavior. In other words, just as cargo cults try to obtain material benefits through superficial imitation of Western rituals and technology, AI systems often try to replicate human behavior without a full understanding of the cognitive processes that enable that behavior.
Critics of AI argue that this approach can lead to problems such as overfitting, where an AI system is overly focused on specific patterns or features in the data it has been trained on, to the point where it cannot generalize to new data. They also argue that current AI systems often lack the ability to reason, understand context, or learn in a truly flexible and adaptable way.
By invoking the analogy to cargo cults, AI developers and their critics are highlighting the need for AI systems to move beyond surface-level imitation of human behavior and to develop a deeper understanding of the cognitive processes that underlie that behavior. This will require advances in areas such as natural language processing, cognitive psychology, and neuroscience, as well as a greater emphasis on developing AI systems that can reason and learn in a more human-like way.
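The overfitting point above can be made concrete with a small sketch. The setup below is purely illustrative (the data, model degrees, and variable names are my own, not from any particular AI system): a high-degree polynomial has enough freedom to memorize its noisy training points, so its training error is essentially zero, yet it generalizes worse than a simple model that matches the true underlying structure.

```python
# Minimal sketch of overfitting: a flexible model memorizes its training
# data but fails to generalize; a simple model captures the real pattern.
import numpy as np

rng = np.random.default_rng(0)

# The true relationship is linear; observations carry a little noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, 10)
x_test = np.linspace(0.05, 0.95, 10)   # held-out points, no noise
y_test = 2 * x_test

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, 1)    # matches the true structure
flexible = np.polyfit(x_train, y_train, 9)  # enough freedom to memorize

print("simple   train/test MSE:", mse(simple, x_train, y_train),
      mse(simple, x_test, y_test))
print("flexible train/test MSE:", mse(flexible, x_train, y_train),
      mse(flexible, x_test, y_test))
```

The degree-9 fit passes through all ten training points (near-zero training error), which is precisely the "superficial imitation" failure mode: perfect reproduction of the observed behavior, poor performance on anything new.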
Me:
How does this "cargo cult" criticism relate to both the "human alignment problem" and the AI alignment problem? And how can we use this "cargo cult" criticism to better understand the "general alignment problem" defined as the human alignment problem in relation to the AI alignment problem?
ChatGPT:
The "cargo cult" criticism of AI is related to both the "human alignment problem" and the broader "AI alignment problem." The human alignment problem refers to the challenge of ensuring that AI systems are aligned with human values, goals, and preferences. The AI alignment problem, on the other hand, refers to the challenge of designing AI systems that achieve their intended objectives without causing unintended harm or negative consequences.
The "cargo cult" criticism suggests that current AI systems often lack a deep understanding of human behavior and cognition, which makes it difficult to align these systems with human values and goals. The lack of understanding can lead to AI systems that behave in unexpected and potentially harmful ways, even if they were designed with the best of intentions.
To better understand the general alignment problem, we can view it as a combination of the human alignment problem and the AI alignment problem. The human alignment problem asks how we can ensure that AI systems align with human values and goals, while the AI alignment problem asks how we can design AI systems that achieve their intended objectives without causing unintended harm.
The "cargo cult" criticism suggests that both the human alignment problem and the AI alignment problem are related to a deeper issue of understanding human behavior and cognition. To address this issue, we need to develop a more comprehensive understanding of human cognition and behavior, as well as a greater appreciation for the complexity and nuance of human values and goals.
By taking a more nuanced and holistic approach to the alignment problem, we can develop AI systems that are aligned with human values and goals in a way that is both effective and safe. This will require collaboration between AI researchers, ethicists, and experts in cognitive psychology and neuroscience, as well as a commitment to ongoing dialogue and reflection on the societal implications of AI technology.
________________
https://www.youtube.com/watch?v=TNu0eLjwDcA