This is not meant to be rigorous, just getting thoughts out there.
This revolution has been hailed as the coming climax of the human race. Some people say that we will merge our consciousness (whatever that means) with AI and live forever, evolving artificially. Others think that some AI will want to kill all humans, which would also be a climax of humanity, of a sort. Either way, people are under the impression that AI is going to eclipse human thought in some way, though no one is sure when.
There are a few distinctions about what tier of intelligence an AI is at, or can be at. These are useful for knowing when you have made something better, and for comparing one AI to another.
Artificial Narrow Intelligence, ANI, refers to artificial intelligence that is narrow in scope. An example is a program that plays chess. This chess program is very fragile: the data it was trained on is only for chess, and thus it can do nothing else but play chess.
Artificial General Intelligence, AGI, would be a program capable of doing many things generally well. You can think of this as roughly equivalent to a human, except that it may be able to calculate some things faster.
Artificial Super Intelligence, ASI, is the idea that an AI will eclipse the ability of human reason. It will easily solve problems that humans find hard: mass transportation, global market optimization, climate change, etc. A good example is When the Yogurt Took Over.
These are the different tiers of intelligence as we generally think about it now. For this, I will use Christine Korsgaard’s formulations in Fellow Creatures.
Korsgaard formulates the differences using intersubjectivity. Explaining the differences between it and other value-giving theories is not in the scope of this post; one can read her paper on the subject. Each thing has an end or final cause, an idea that comes from Aristotle’s metaphysics. A tool, an animal, and a person have different ends. Thus, things have ends, and something is good-for a thing if it contributes to that thing’s end.
There are things that are good-for tools. A knife is a tool. A good knife is one that is sharp. Sharpening the knife is good-for the knife. Not breaking the knife is good-for the knife. The difference is that there is no point of view for the knife: nothing is good-for the knife from the point of view of the knife, since it does not have a point of view from which things can be good or bad. Without going into the entire critique, it is still logically possible that things have a point of view; one may see this in some forms of animism, Zen Buddhism, etc., where things do have points of view.
Animals have a good-for as well, and they also have some sort of point of view. A lion is aware of the spatial distinction between it and its prey. Monkeys are able to make social distinctions in groups. For the purposes of the paper, ignore any empirically continuous concerns. Empirically continuous concerns are claims that an animal is essentially too dumb (low IQ or some other arbitrary numerical value ascribed to intelligence) or is missing some mental property (memory, language, etc., since these are all exhibited in animals to some degree).
The difference between humans and animals is not that we exhibit a greater degree of intelligence, more mental properties, or better mental properties than they do. The difference is that human beings play an active role in deciding our lives. This comes from the Kantian distinction of what it means to be rational. There is a sense in which animals are dictated only by phenomenal forces. Humans are influenced by phenomenal forces (psychology, sociology, physiology, etc.), but Kant believes that we are able to impose our will, our free will, on our actions. We are able to use rational thought to decide if our maxims (subjective rules of action) can become objective rules. There is a sense in which we are responsible for our actions. Once again, I can link to the reasoning for these claims, but assume they are generally accurate for the purpose of this exercise.
It is not that humanity has always thought of ourselves as creations in the image of God, but often we do. Maybe because of arrogant theories such as Plato’s philosopher king and the idea of being created in the image of some all-knowing, all-good, all-powerful God, humanity has for some reason thought of ourselves as able to understand everything, and as rational most of the time. This has been mentioned in other areas as well. Carlo Ratti in Open Source Architecture has a term for architects and urban planners who were arrogant: the Promethean Architect. These were people who believed that they could build the perfect city, or the perfect apartment building. He lists many great examples of failure caused by great arrogance. The idea here is that the (or one of the) Achilles’ heels of human intelligence has been the arrogance that we are “very smart”. I am not too sure how we got here, considering the theme of the Socratic dialogues was that it is very hard to know anything, especially for the elite of society who seemingly know everything.
Thomas Hobbes tried to define the equality of man. He said that in terms of physical capability, men may as well be equal in the state of nature: one person may be the strongest of all, but you can always combine more people to overtake them, or use weapons. As for mental equality, most people think of themselves as rational since our own reason is nearest to us. What other people do seems irrational, but they think they are reasonable in the same way that you think you are reasonable, that you do your actions for rational reasons; therefore, we are all mentally equal.
Kant said very similar things in his philosophy. Since we take ourselves to be active in our lives, we think that we will the ends of our actions, and we must also take it that everyone else does the same. Korsgaard states that
… she makes the laws for her own conduct, rather than being governed by laws that are given to her by nature. This is the property that Kant called “autonomy,” being governed by laws we give to ourselves. Rationality is liberation from the control, although not the influence, of instinct.
Here I do not have much to say. Whether ML is a thing or capable of being an animal-like entity is up for debate, as is whether an animal is a thing or something less than a person but more than a thing. It seems very possible for Boston Dynamics to make a robot that is intelligent like a dog and can act as a seeing-eye dog, if they spent their time doing this. However, neither animals, things, nor any ANI we have created so far has anything similar to what makes humans persons. There is a further question of how we would identify whether we had created ML that was capable of this. The Turing Test and the Chinese Room are examples of identification tests. I personally think that if ML (or some alien race) were able to communicate with us and tell us reasons justifying what it is doing, then in a Kantian fashion it would seem no different from us. The only reason we do not doubt the rational and autonomous nature of other humans is that we are them: we look the same, act similarly, and come from and create other ones like us.
Nowhere in conceptions of persons is there anything special about humans except these two things: being capable of rational thought and being autonomous. Seemingly, a lot of philosophy and science has also been destroying the conception that humans are closer to rational than irrational. My first two philosophy briefs posts had a similar theme. For fuck’s sake, some guy won a Nobel Prize for showing this apparent assumption of economics was false. For all intents and purposes in Kantian ethics, we must assume that humans have rational reasons for their actions. Kant calls this “respect”. Korsgaard analyzes this further in “Creating the Kingdom of Ends: Reciprocity and Responsibility in Personal Relations”. Mark Schroeder presented a talk at UIUC on how to treat someone you are in a relationship with, since sometimes they act as a thing and want to be treated as a thing, and sometimes as an end. For example, we know someone may be acting a certain way only because they are hungry or tired, and we can take this into account; but sometimes the way they are acting is not influenced by instincts or phenomenal reasons, that is just who they are, and they are either acting ethically or unethically.
The general consensus here is that, although we are capable of rational thought, empirically it seems as if we do not exercise it on the microscale (relationships) or the macroscale (politics, economics).
ML researchers have been trying to at least replicate human-level thought and reason. At the very least, they want to replicate the way we are able to learn to do certain tasks, like playing games. However, there is a constant worry that AI is not smart enough, that 90% accuracy is not enough.
Take the problem of labelling Facebook posts as hate speech. I am not sure humans would achieve great accuracy on this, considering we cannot agree on what hate speech even is. However, an AI might achieve as good or better accuracy in predicting hate speech if trained on good labelled data (good labelled data being posts whose hate-speech labels are agreed upon). The point here is that humans might not be any better at the task, and could well be worse.
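To make the disagreement point concrete, here is a minimal sketch (pure Python, with made-up labels) of measuring inter-annotator agreement using Cohen’s kappa. If two human labellers only reach moderate chance-corrected agreement on what counts as hate speech, it is unclear what demanding near-100% model accuracy would even mean.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled the same.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(freq_a[k] / n * freq_b[k] / n
                     for k in freq_a.keys() | freq_b.keys())
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical hate-speech labels (1 = hate speech) from two annotators.
annotator_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
annotator_b = [1, 0, 0, 0, 1, 1, 1, 0, 0, 0]

kappa = cohens_kappa(annotator_a, annotator_b)
print(round(kappa, 3))  # 0.4: only moderate agreement, despite 70% raw overlap
```

Here the two annotators agree on 7 of 10 posts, but once chance agreement is subtracted out, kappa is only 0.4, which is roughly the ceiling any model trained on these labels could be meaningfully scored against.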
It is not just that AI might do as well as or better than us; humans may also work in similar ways. It is entirely likely that some of the ways we act and do things are similar to what ML models are trying to do; the models may just be a matter of degree worse, and/or take longer to train and need more data. There might be neuroscience reasons for this (maybe we are born with some decent models for learning already, which would explain how we can learn language fairly easily, etc.). I do not remember where I heard this either, but humans will predict unknown faces from a distance and think it is someone they know (similar to a child yelling “Dad!” at an airport at a man who is not their dad).
My favorite thought on this topic is that sometimes I have no idea whether tweets are from a bot or a human. Many people have trained models on Reddit data, and the models have spit out things like “Hillary Clinton is responsible for 9/11”, and people have said the exact same things as the bots. In general, I do not expect humans to do a better job of distinguishing Russian bots from real Americans, since I barely can given only one tweet, and sometimes cannot even after looking through an entire account’s tweets.
The point of the above ideas is that I think we should lower the bar (e.g. 70% is fine for hate speech prediction) for any of the things ML is currently trying to do, since those are things that humans probably do similarly, just seemingly better or easier (easier meaning quicker, or with less data or less power). If you have read a lot of Reddit comments and Twitter replies in particular, you do not have great faith in humanity as rational, in the sense that people are thinking things of their own free will. It seems as if many of those comments could have been produced by a 50 line python script importing tensorflow. I am sure the content moderators at Facebook and Twitter do not have a very high opinion of the rationality of humanity either.
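To illustrate how little machinery that claim actually requires, here is a toy comment generator; not even tensorflow, just a bigram Markov chain over a hypothetical made-up corpus, in well under 50 lines:

```python
import random
from collections import defaultdict

def build_bigrams(corpus):
    """Map each word to the list of words that follow it somewhere in the corpus."""
    followers = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            followers[current].append(nxt)
    return followers

def generate(followers, start, max_words=10, seed=0):
    """Walk the bigram table from a start word, picking a random follower each step."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and followers[words[-1]]:
        words.append(rng.choice(followers[words[-1]]))
    return " ".join(words)

# Hypothetical corpus of internet comments (invented for this sketch).
corpus = [
    "this is fake news",
    "this is the worst take",
    "the media is lying",
    "fake news is everywhere",
]

followers = build_bigrams(corpus)
comment = generate(followers, "this")
print(comment)
```

The output is a grammatical-looking remix of the corpus with no thought behind it at all, which is roughly the standard a lot of real replies manage to meet.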
Focus not on recreating the human mind, but on the task at hand. It is useful to look at humans as an example, but we may not be the be-all and end-all of rational thought, or the best way to do things. What humans are seemingly good at is being rational and autonomous, and if you have studied any Kant, we can only know this indirectly, not directly. If this is true, then studying humans to create some sort of AGI or person-level AI is going to be theoretically impossible. When creating ANI, hitting 80% accuracy with some model may actually be better than humans; we are just arrogant and think we are 100% accurate, or some other extremely high number. At the very least, humans think we are better than ML at most things. Korsgaard’s best work in “Fellow Creatures” is her destruction of the arrogance of humans in this sense, which shows that we are closer to animals and ML than we think we are.
Humans, for most things, are not hot shit. I think the AI revolution will not necessarily create AI that is better than humans, but will just show us that humans were not that high of a bar in the first place for a lot of things. It is logically possible, and practically speaking very likely, for AI to get better at any game. I sincerely doubt that we are as good at everything as we say we are, and particularly that humans are rational more often than not. However, this is not a problem for ML researchers or for people. For ML researchers: use humans as an example but not the ideal. For people: do not say or think something that a simple 50 line python program could say; it is most likely going to be dumb, or at least unoriginal, and we can do better than what takes a programmer 10 hours to create. Secondly, it might not be a very good use of your time to dedicate your life to something ML can do well (video games, chess, etc.), since the final cause, the end of humans, is not something that a computer can do. Humans, as people have thought, are capable of being rational and therefore are autonomous. We should live our lives with that in mind. Similarly, we could live a virtuous life, full of contemplation, as Aristotle proposed was the end of humans. Either way, we are not things, we are not animals, and we should not live like them; we should live like people.