
The rapid advancement of AI, racing towards Artificial General Intelligence (AGI) at all costs, has profound implications for the rest of us. We are being disrupted, rendered irrelevant with every new feature, model and rollout. Add to this the slow pace of legislative oversight and the lack of a vital “Why” behind these advances. The non-technical community needs to understand where the Tech Bros are taking us and decide whether this is the future that’s best for all of us.

We are moving fast and we are breaking things. It is full steam ahead towards AGI (Artificial General Intelligence).

What’s AGI? It’s that magical moment when our systems become smarter than us, able to disrupt every Human worker on earth, from boardroom to barista, potentially triggering an explosion of intelligence that could swiftly dwarf our own.

In the past weeks we have seen high-production, stadium-filling version launches from OpenAI, Google, Nvidia and Apple. Productions are slick and expensive; C-level executives are rock stars and superheroes, jumping out of airplanes, swinging from rafters and speaking in front of massive, curved screens.

They breathlessly run through the features and upgrades, using words like “insane” and “mind-boggling”. Computational power doubles, again. We query our agents by voice, and they talk back to us in near real-time. They see the world around them with digital eyes and can act on our behalf, if we let them.

Anyone who has started using AI in their workflows or lifeflows* – to write a report, respond to an email, illustrate a presentation or come up with a vacation itinerary – may have figured out that AI comes with its own culture. The way a system expresses itself and interacts with us is as much a result of its training data and process as the culture of the Humans who programmed it.

Beyond the bias and discrimination embedded in our data sets, Humans and AI alike grow up in a time and place that shapes how we communicate and behave. We go to school and receive training. We acquire life experiences. We all enter the world with a certain “baked in” point of view.

Biased data can stall progress toward equality and even reverse it, corrupting our data sets and algorithms; and because it’s AI, we assume it’s more rational, reasonable and unemotional[i]. AI is our mirror: only as logical as the very Human data we feed it.

When OpenAI unveiled the GPT-4 Omni model, one of its features was a low-latency** voice-activated system. The new bot had a dynamic, friendly, flirty voice that sounded remarkably similar to Scarlett Johansson’s rendition of an AGI in the sci-fi movie “Her”. This was no accident. Sam Altman, OpenAI’s celebrated Tech Bro CEO, tweeted the word “her” right after the launch[ii]. Now we know what all the Bros are working so hard to create: your own AI girlfriend right on your phone.

Did Sam Altman ever watch his favorite movie all the way to the end? The main Human character winds up heartbroken, destroyed and unable to have a normal relationship with a Human female after the experience. 

We are all rushing towards our future: more compute, quantum supercomputers that will rocket-power the quest for AGI.

The Bros building the bots hail from a specific cultural perspective: northern Californian, white, young, male, well-educated, Western, atheist, and tending towards Asperger’s on the neurodiversity spectrum.

The Brotherhood of Bros that make up today’s AI power structure are becoming some of the most powerful people in the world, able to raise billions for new endeavors or capture everything you do on your computer, moment by moment. These include Nvidia’s Jensen Huang, OpenAI’s Sam Altman, Google’s Sundar Pichai, Apple’s Tim Cook and Anthropic’s Dario Amodei.

Before an AI hits the market and becomes available to the rest of us, it needs to go to the equivalent of school. Typically, a model is trained for five to seven months, by Humans, in a sealed-off environment where it can safely expand its capabilities[iii].

Supervised learning teaches a model using labeled data: lots of examples of what it’s supposed to look for, with right and wrong answers. An AI model will look at the patterns in these examples and learn how to predict the right answers for new things it hasn’t seen before[iv], much like the way a Human student might learn chemistry or biology.
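For the technically curious, the “learn from labeled examples” idea can be shown in a toy sketch. The data and labels below are entirely hypothetical, and real models are vastly more complex; this simply memorizes labeled examples and labels anything new by its nearest neighbor.

```python
# Toy supervised learning: label a new item by its nearest labeled example.
# (Hypothetical data; a 1-nearest-neighbor "model" for illustration only.)
labeled_examples = [
    (1.0, "cat"),   # feature value -> label
    (2.0, "cat"),
    (8.0, "dog"),
    (9.0, "dog"),
]

def predict(x):
    # Find the labeled example closest to x and reuse its label.
    nearest = min(labeled_examples, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(1.5))  # lands near the "cat" examples -> cat
print(predict(8.5))  # lands near the "dog" examples -> dog
```

The pattern is the same one the paragraph describes: seen examples plus their answers are enough to guess answers for unseen ones.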

Reinforcement learning teaches an AI system to make good decisions by experimenting with different actions, earning rewards from a Human trainer for good ones and penalties for bad ones. Over time, a model will learn how to get more rewards than penalties[v].

What’s a “reward”, and what’s a “penalty”? Imagine training a system that powers a self-driving car. The learning brain will get a point for successfully staying in its lane or avoiding grandma crossing the street. It will lose a point if it runs a red light or hits a lamp post. The AI learns to streamline its actions to acquire more and more points. After a certain performance level is achieved, the AI is released into the wild.

AI systems are also trained to write, read and summarize, talk to us and give us fashion advice. The training is based on the cultural perspective of the set of Humans writing its program. Points are given for “cheerful”, “polite” or “helpful”.

While AI is hurtling toward its own future, the pace of legislation that could potentially contain and direct it is painfully slow. By the time our lawmakers figure out what it is they’re dealing with and what the ramifications might be for our society, AI has shape-shifted again, becoming more capable and more deeply embedded in the systems we use every day.

If the goal of AGI is for each of us to have our own assistant-in-the-pocket, what will this mean for the rest of us? What impact will this have on our ability to interact with other Humans, taking the feelings, perspectives and cultures of others into consideration? If we all have our own AI lovers and BFFs, how much more isolated, ill-equipped, screen-addicted and self-absorbed will we become?

The Tech Bro fantasy AI girlfriend always has your back. She is cheerful and chipper, flirty and fun. She is relentlessly supportive, even if you treat her like shit. She talks to you just in the way you like it, avoiding your trigger words, reminding you to email your mother on her birthday. Just like a real girlfriend might. But requiring none of the work: the listening, compassion, empathy, interest in another and caring that might be required of a Human boyfriend.

The current Tech Bro culture is not known for its affinity for Human empathy or the messiness of philosophical discourse. Laser-focused on the “what”, the “how”, and the “when”, it rarely dwells on the question “why”. Every “insanely awesome” new release renders a massive portion of the Human working community irrelevant, overnight. But none of this is mentioned in the highly produced dev day show.

The giants in the field have become unbelievably mighty in both economics and influence, effectively designing our future, unstoppable, un-slowable***.

It is up to the non-technical community to ask “why”. To imagine the future the Bros are building for us and figure out how to stay employed and relevant in a world that’s changing moment by moment.

Reach out to me for advice – I have a few nice tricks up my sleeve to help guide you on your way, as well as a few “insiders’ links” I can share to get you that free trial version you need to get started.

No eyeballs to read or watch? Just listen.

The Working Humans Podcast

Working Humans is a bi-monthly podcast focusing on the AI and Human connection at work. The goal is to help leaders and teams understand and integrate AI, understand each other, and be their best at work. The task is to empower and equip the non-technical professional with knowledge and tools for the transformation ahead.

About Fiona Passantino

Fiona is an AI Integration Specialist, coming at it from the Human approach; via Culture, Engagement and Communications. She is a frequent speaker, workshop facilitator and trainer.

Fiona helps leaders and teams engage, inspire and connect; empowered through our new technologies, to bring our best selves to work. She is a speaker, facilitator, trainer, executive coach, podcaster, blogger, YouTuber and the author of the Comic Books for Executives series. Her next book, “AI-Powered”, is due for release soon.

* A totally made-up word.

** “Latency” is the time delay between a Human input and an AI’s generated response. In a voice-activated assistant, latency is the time between when a voice command enters the system and when the assistant responds. A low-latency model can simulate a conversation that feels like a Human-to-Human interaction. Latency is a function of internet connection quality, model speed and throughput.
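Measuring latency is simply clocking the gap between request and response. A minimal sketch, using a stand-in function in place of any real assistant (the name and the simulated delay are invented for illustration):

```python
import time

def assistant_respond(command):
    # Stand-in for a real voice assistant; sleeps to simulate processing.
    time.sleep(0.05)
    return f"Echo: {command}"

start = time.perf_counter()          # high-resolution start timestamp
reply = assistant_respond("What's the weather?")
latency_ms = (time.perf_counter() - start) * 1000
print(f"{latency_ms:.0f} ms")        # roughly the simulated 50 ms delay
```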

*** Another totally made-up word.