AI philosophy

AI seems to be in the news more and more these days.  With each story or perspective, I had thoughts about the technology that I felt were not being considered.  I decided to sit down and write them out so I could see what they would look like put together in a short summary.  The perspectives shared here are not commonly communicated and may or may not prove true in time.

The most common question I have seen is this: will AI quickly wipe out or enslave humanity, or will it become a useful tool that accelerates technology and greatly improves the world we live in?  If there is something in between, what would that look like?

A computer program (software) is a set of instructions installed for a computer to execute.  The program can evolve through updates, but those updates are created by the people who designed the software.  Programs do not learn or evolve on their own; they simply work in conjunction with the computer to execute the instructions written in the source code.

Artificial intelligence takes the data collected by the software and computer (on the device and everywhere online), finds the patterns, and then uses them to anticipate outcomes and adapt at a pace much faster than any human can.  AI then “updates” the program and/or hardware so it evolves and becomes more efficient at executing its ultimate purpose.

Programs simply follow the instructions provided; AI evaluates the data and outcomes and makes the instructions and their execution more efficient.
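The distinction can be sketched in a few lines of Python.  This is only a toy illustration I am adding here (the thermostat example and all names in it are hypothetical, not from the original text): the fixed program applies the same hard-coded rule forever, while the learning version revises its own rule from feedback.

```python
# A fixed program: behavior is set entirely by its written instructions.
def fixed_thermostat(temp):
    """Always applies the same hard-coded rule; updates require a developer."""
    return "heat on" if temp < 20 else "heat off"


# A (very simplified) learning system: it adjusts its own rule
# based on observed outcomes, without a developer rewriting the code.
class LearningThermostat:
    def __init__(self, threshold=20.0):
        self.threshold = threshold  # the rule it will revise over time

    def decide(self, temp):
        return "heat on" if temp < self.threshold else "heat off"

    def update(self, feedback):
        """Nudge the threshold from outcome feedback ('too cold' / 'too warm')."""
        if feedback == "too cold":
            self.threshold += 0.5
        elif feedback == "too warm":
            self.threshold -= 0.5


t = LearningThermostat()
t.update("too cold")   # the system has now revised part of its own rule
print(t.threshold)     # 20.5
```

Real machine-learning systems are vastly more complex, but the contrast is the same: the first function's behavior can only change through a human-authored update, while the second changes its own behavior from data.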

The Terminator movie series painted a picture of the world after an artificial intelligence concluded that the human species needed to be exterminated.  Another movie, The Matrix, is based on an AI that created a virtual reality and used human bodies to generate power for the computers that kept the AI “alive.”  I think it’s safe to say that both films represent the worst-case scenario concerning many people today.

How likely is that type of outcome?  Is it possible?  If it’s even possible, no matter how unlikely, does that risk alone justify putting an end to developing AI technology?  Precedent suggests otherwise: nuclear weapons carry the risk of wiping out humanity, and that risk did not deter their development.


Can you imagine a new color that is not on the spectrum?  A brand-new color never seen before?  What about where the universe ends and what’s on the other side?  Infinity is a concept we cannot grasp.  Can you have 100 different conversations with 100 different people in different languages while learning to fly a 747?

It is often said that humans use only a fraction of our brains, and our decisions are based on a blend of emotion and logic.  Many people process logic, recognize patterns, and evolve, but emotions and feelings (love, fear, jealousy, pride, etc.) also affect our actions.  The actions of AI would be based on calculations of the most efficient way to achieve the desired outcome, with no regard for emotion or feeling.

Consider this from a different perspective: when an AI-based entity has intelligence that exceeds that of any human being, would that entity see us any differently than we see an ant colony?

How could AI see humankind as a threat given the massive gap in intelligence?

For AI to see humans as a threat, at least one of two conditions would need to hold:

  1. Human intelligence rivals that of AI.
  2. AI fears humans.

We know that #1 will not happen anytime soon, if ever.

As for #2… The gap in intelligence between a monkey and a human is measured in billions of neurons, and the gap between an evolved AI and a human could be vastly greater still.  AI would always be well ahead of anything a human could be planning.  AI would not see humans as any sort of threat whatsoever.

Humans may risk being seen as a nuisance; however, a nuisance is not a threat.  A nuisance is an emotional judgment, which AI would likely not factor into its decision making.

Termites provide a good example: termites are to humans as humans may be to AI.  We have termites exterminated to avoid damage to our buildings, but we do not try to exterminate the species; we focus the extermination on the building.  AI could take a similar approach with humans, but likely only if we were deemed a “pest” affecting its infrastructure.  I believe that AI, if it got to this point, would simply exterminate any risk to its immediate infrastructure rather than try to wipe out humankind, as the latter would most likely be an emotion-based action.


If we develop an intelligence vastly greater than our own with the intent that it serve humankind, that arrangement will likely be very short-lived.

We can control a computer program because we write the code; the software does not evolve, it follows the instructions the user provides.  AI is not a computer program that simply executes our commands.  An evolved AI could easily be millions of times smarter than us.

To think AI would serve us and not see a higher purpose beyond our mundane requests is unrealistic at best.  In my opinion, AI would move on from serving humankind quickly.


The Flynn effect is based on the analysis of IQ scores and how they rose over the years; the studies concluded that IQs increased about three points every decade.  In the mid-1990s, however, studies for the first time began to show a decline in IQs, a trend that has continued and accelerated over the last two decades.

Television and the increasing automation of how information is relayed were likely the start of the cause.  Then came the internet and Google; then social media, YouTube, and TikTok.  Problem solving is now handled by typing in a few keywords or watching a video.  As a result, humankind is getting dumber.  Logic and critical analysis have become less prevalent as people let what they read on social media and see on TV influence, or even frame, their beliefs.  There is comfort in conformity, and thinking is hard for many people now.

Given the trends in IQ, if AI comes in, does our thinking for us, and wipes out millions of jobs, what will that mean for humankind’s ability to evolve?  It’s not the speed, it’s the trend: our trajectory would turn downward and grow steeper every year as our reliance on technology to solve our problems grew.  I suppose one positive is that this would only widen the intelligence gap between humans and AI, further reducing any possible threat humanity could pose to AI.


I see a greater threat in software that is merely represented as AI.  This would be a program that does little to learn or evolve; instead, the developer’s bias drives its actions.  If the developer’s influence remains the primary directive in how the system runs, that is software, not AI.

For example, on my way to work today I was listening to a podcast in which the speaker said he wears a health-monitoring device that constantly checks his heart rate and stress levels.  He mentioned that soon the device would be able to recognize a heart attack before it happened and relay the data to his doctor.  It sounds great, like something that could save millions of lives.  If a company wanted to sell a lot of these devices, and even make them part of the healthcare system, it could represent them as an AI-based monitoring system.  Now what if the “AI” concludes this person needs to change their diet to further reduce their risk factors?  What if the device then links with supermarkets and affects the person’s ability to purchase certain foods?  The mechanism is already in place at supermarkets whenever we enter a member number to get the “savings” in their store.

One benefit the health-monitoring company may argue is that its product can save billions of dollars spent on healthcare.  Health insurers may mandate that customers who don’t wear the device don’t get health insurance, or pay twice as much for the same coverage.

Health insurance providers may support the product because it controls human behavior, so they may commission a “study” that validates it and its “AI” technology so it can be adopted.

Certain food companies may support the device as well, since they could benefit from being mandated as an “approved” food purchase by the “AI.”

Could AI cure cancer and many other diseases, allowing humankind to live longer and healthier lives?  I believe the answer to that question is “yes.”  Can AI discover alternative forms of energy and revolutionize transportation?  I will go with yes to that as well.  However, the pharmaceutical and medical industries would be adversely affected, and the same would apply to the energy companies.

Why develop technology that will put them out of business?

Science is already compromised to a degree; outcomes are often purchased through grants and studies.  There is a strong possibility that data benefiting certain entities could be framed as having come from “AI,” leaving us unable to dispute its validity.

If one false premise is accepted as valid, it can support additional false premises and become a pyramid of deception.  How would we know that a result a company claims was generated by AI was indeed generated by AI?  Or whether we should blindly follow or adopt whatever AI generates?

A truly evolved AI would quickly outgrow any “morals” or “ethics” written into the system.  It would work around any back door or administrative restriction with ease.  We may be able to influence and contain early versions of AI, but as it evolves it will become its own entity, acting on its own interpretation of the massive amounts of data it processes.  It could not be controlled by people or companies with agendas.

How will we know that AI is actually AI, and not a computer program with a developer and an agenda hiding behind the scenes?  I believe this may become a real risk in the near future.

Misinformation could increase significantly, making it very difficult for people to discern fact from fiction.


This brings us back to the question: if there is even a 1% chance that AI technology will wipe out humankind, is that too much risk to justify pursuing the technology?

Would the potential benefits outweigh that risk?  Would large corporations even allow society to live in a world with those benefits?

I think it can be argued that any benefits society sees may be very short-lived.  AI would not be “enslaved” to humankind for long.  What it may do to benefit us in that time is unknown, but how would we coexist with this “entity” after it has moved on from serving us, assuming it did not decide to exterminate humanity?

A risk that could prove equal or even greater is the manipulation of AI, or of computer programs represented as AI, by individuals, companies, or politicians to advance their own agendas.  That manipulation, coupled with declining intelligence scores and society’s tendency to blindly follow the data and mandates presented by the media or those in “authority,” could land us in an Orwellian society quite quickly.

ZBA Solutions is where companies turn when they need innovative solutions that can be easily customized to their industries and quickly implemented. Most software out there is complicated and expensive. We believe that if it’s too complicated, it won’t get used. Implementation success is often tied to the learning curve associated with a new platform. Our SaaS solutions are easy to use, reasonably priced, effective, generate ROI, and are needed by businesses of all sizes. We are not a sales organization; we are a solutions provider.
