
Artificial Intelligence

The Artificial Intelligence module opened my eyes to many topics and concepts I had never been exposed to. When I was done reading and reviewing the module, I was scared. I was scared because I was worried about the existence of my precious human race in the future. I also wasn’t aware of how much Artificial Intelligence I currently use, such as Siri on the iPhone, automatic customer assistant representatives, and so forth. I also learned just how much of our human work AI (Artificial Intelligence) can replace, mostly in the industries of automation and agriculture.


Two things the module touched on that were notable were the Transhumanism Movement and the Technological Singularity. Transhumanism is something that I am very positive about. I believe that if we can use technology and science to better our human condition, not replace it, but better it, then we should go for it. The Technological Singularity is the concept I am uncomfortable with, because it is the runaway advancement of technology brought on by the creation of AI. The advancement is so fast that it can result in unfathomable conditions for the human race. The word “unfathomable” is the uncomfortable part, because I am more comfortable in a world that can be completely comprehended by the human race. This singularity would create an intelligence explosion and produce intelligence that would outdo human intelligence. The problem I see with this is that we always say, “knowledge is power,” so technically, since the human race is the most intelligent and knowledgeable species (that we know of), we are the most powerful, which is good because we have control; but if we are surpassed in that area (knowledge), will the human race be in control anymore? Or will AI run us? Will it treat us as nicely as we treat our less knowledgeable pet cat or dog? We don’t know, but I assume, from the perspective of someone who is part of the human race, that we certainly never want to get to that point. The module states that by 2099, organic human beings will be a small minority of the intelligent life forms on Earth. Luckily, I will not be around to experience that. The TED Talk by Zeynep Tufekci explains the problems with AI. For example, AI may discriminate without knowing it is discriminating. It can potentially be dangerous, and I agree. AIs don’t have morals like we humans do.


I believe that one possible strategy we can enact to make sure our humanity is not lost to technology is ensuring that technology only assists and facilitates the human race rather than replacing it. We should focus on how we can continue the advancement of technology that assists us, not replaces us. Under the Ethical Implications tab in the module, it discusses The Laws of Robotics, which I personally agree with because they all have to do with keeping the human race a priority. Microsoft’s CEO laid out the rules best, starting off by stating that AI must be designed to assist humanity and that we must understand how it operates. He listed some others, but those two were the most important, as they highlight that AIs are not supposed to replace us but assist us, and I believe anyone working on the advancement of AI should keep that in mind. It should actually be a law put in place to protect humanity.


I believe AI will affect me personally by making my life easier. Siri googles things for me without me having to type anything and calls people for me without me having to dial. AI will also make me lazier. If I have a machine that can do it for me, why do it? The problem I see with this, as explained in the TED Talk by Ken Jennings, is that humans will stop trying to learn and know things and will rely solely on machines. The problem with that is that when we have conversations amongst people, we talk about subjects we are all familiar with, but if we stop learning to do things or knowing things because we can simply have them done for us or look up the answers, then what are we going to talk about? This is an extreme case, but I can see how it could get to this point. So, personally, I feel that AI will affect me both negatively and positively, but mostly negatively, because it will make me a lazier, less knowledgeable person if I allow it to.


It will affect me professionally because it has the potential to replace me. I currently do administrative work, which is routine work, and routine work is one of the first kinds of jobs AI will replace. So, currently more experienced or better employees can replace me, but soon I could be replaced by machines too. That is a lot of competition to worry about. Additionally, what I like about the workplace is being able to collaborate and converse with people. If robots replace people in the office, it will make work a boring place to go. There will be no human interaction, which is something I believe a lot of humans love and need to be happy. Since human beings are usually less productive when they are sad, I know I would be less productive if I worked solely with machines, thereby allowing my professional career to dwindle.
