AI is all the rage these days.
It just might be the new arms race.
Just a couple of days ago, President Trump was flanked by Larry Ellison (Oracle co-founder), Sam Altman (OpenAI co-founder and CEO) and Masayoshi Son (SoftBank CEO) as they announced Stargate, a $500 billion investment in “AI infrastructure”.
Private companies have invested hundreds of billions in AI and GenAI over the past couple of years, and it appears that the hype train around “AI” is nowhere near slowing down.
Out of my own selfish interest regarding my career and future, I set out to explore nine key questions about GenAI:
What is it?
What’s its history?
Who are the pivotal figures?
Which companies are leading the space?
What are its capabilities vs. limitations?
Did we overestimate its impact?
How has it reshaped work and what’s next?
Which jobs are at risk and how should we adapt?
Where can we learn more?
What follows is a buffet-style exploration of these questions. Take what serves you best.
Let’s dive in!
What is GenAI?
Generative artificial intelligence, hereafter referred to as “GenAI”, refers to computer systems that can generate original outputs that mirror human-generated content (e.g., text, images).
GenAI systems use large language models (LLMs) and other deep learning architectures to do so.
For more background / terminology:
Deep learning is a type of machine learning that uses neural networks with multiple layers as a sort of sophisticated pattern recognition system. The “deep” in deep learning merely means that it has many layers of nodes, allowing it to learn increasingly complex patterns.
LLMs are a specific application of deep learning, focused on processing and generating human language.
Machine learning (ML) describes the manner in which computers learn patterns from data without explicit “rules”. We provide the model with a high volume of cases / examples and allow it to generate the rules (e.g., determining what is vs. is not a dog based on thousands of images).
Neural networks are a specific type of ML inspired by how our brain operates. The connection of nodes and their strength (metaphorically) mirror that of neurons in our brain. Stronger patterns have stronger connections.
The strength of connection between nodes is determined through an iterative process of comparing predictions to the correct answer and adjusting weights based on the level of error. The weights are adjusted through backpropagation and gradient descent, and this process can be repeated hundreds / thousands / millions of times to increase the predictiveness of the model.
Backpropagation is the method for calculating how much each connection contributes to the error (the gradient).
Gradient descent is the method for actually updating weights / connections based on calculations.
If you took a fan boat out on the Everglades, gradient descent would be the motor and backpropagation would be the steering wheel.
The motor provides power to move in a direction, while the steering determines which direction to take based on feedback. Together, they help navigate toward the destination — just like how these mechanisms work together to optimize neural networks.
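To make the mechanics above concrete, here’s a minimal sketch in Python. It’s a deliberately tiny toy (one weight, a hand-derived gradient standing in for full backpropagation), not how real frameworks do it, but it shows the loop: predict, measure error, compute the gradient, and step the weight against it.

```python
# Toy example: learn a single weight w so that w * x approximates y = 2 * x.
# In a real network, backpropagation computes gradients for millions of
# weights; here the "network" is one multiplication, so we can derive
# the gradient by hand.

# Training data following the hidden rule y = 2 * x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0             # start with an uninformed weight
learning_rate = 0.01

for step in range(200):
    # Forward pass: make predictions with the current weight
    preds = [w * x for x in xs]

    # "Backpropagation" (trivial here): gradient of mean squared error
    # with respect to w, i.e. d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)

    # Gradient descent: nudge the weight in the direction that reduces error
    w -= learning_rate * grad

print(round(w, 3))  # converges toward 2.0, the true slope
```

In the fan-boat terms above: computing `grad` is the steering (backpropagation tells us which way to turn), and the `w -= learning_rate * grad` update is the motor (gradient descent actually moves us).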
What’s the history of GenAI?
The early foundations of GenAI trace back to AI’s introduction at the 1956 Dartmouth Conference. From there, the field progressed through the development and use of backpropagation in the 1980s and ’90s, Geoffrey Hinton’s work on deep learning architectures in the 2000s, Google’s introduction of the transformer architecture in 2017, OpenAI’s ChatGPT igniting public interest in 2022, and the recent announcement of Stargate (a $500 billion investment in AI infrastructure), up to the use cases and debates we see in the public forum today.
While early work was foundational and slower moving, there’s been a major uptick in the GenAI space and the future is a wide open frontier.
Who are the pivotal figures?
While Geoffrey Hinton, Yann LeCun and Yoshua Bengio are often called the “Godfathers of AI”, Hinton was the one who pioneered deep learning and backpropagation (fundamental concepts for GenAI).
And while the list could certainly go on, it’s worth highlighting Ashish Vaswani and his colleagues for their pivotal work in introducing the transformer architecture that forms the backbone of modern LLMs.
These pivotal figures stood on the shoulders of the giants that preceded them (e.g., Alan Turing, inventor of the Turing Test, and John McCarthy, organizer of the Dartmouth Conference that coined the term “artificial intelligence”), much like those that follow them will stand on their shoulders.
Who are the most impactful companies of today and recent years in the GenAI space?
DeepMind was founded in 2010 with the ambitious goal of developing artificial general intelligence (AGI). The company was initially focused on games, was purchased by Google in 2014 for $500 million, and merged with Google Brain in 2023 to consolidate Google’s AI efforts.
OpenAI was founded in 2015 as a non-profit with the express aim of ensuring AI would benefit all humanity. While its original intent was to research AI in an open and transparent manner (hence the name), the business model changed in time and Microsoft became a major investor.
OpenAI’s GPT-4 (generative pre-trained transformer) and ChatGPT ushered in widespread adoption and use, and have become synonymous with the GenAI space much like “Googling” became synonymous with internet searching decades ago.
With that said, concerns have been mounting since Microsoft became a major investor in OpenAI, and upstarts have emerged, including Anthropic’s Claude and xAI’s Grok. Each generative transformer model has its own strengths and quirks.
Needless to say, it’s on. The next few years will be mighty interesting.
What are GenAI’s present capabilities vs. limitations?
Key capabilities include, but are not limited to:
Text generation and understanding (i.e., natural language processing),
Code generation and analysis,
Image generation and manipulation, and
Pattern recognition.
Key limitations include, but are not limited to:
Accuracy and understanding (e.g., producing “hallucinations”),
Technical and practical constraints (e.g., cannot open files or send emails), and
Ethics and safety (e.g., understanding emotional nuance).
GenAI has powerful capabilities but shouldn’t be used with a “set it and forget it” approach.
Take coding tasks: while tempting to string together generated code snippets, this can lead to missing variables and broken functionality. As these pieces accumulate, tracking down bugs becomes increasingly complex.
It helps to think of GenAI as a brilliant but overly technical colleague who needs context and some hand-holding to be effective.
It excels at documenting existing code, suggesting optimizations (when immediately validated), and guiding development — but requires careful oversight.
Where did we think we’d be today? Were there forecasts of “automating away all the work”?
From the early 2010s through early 2020s, many experts predicted fully autonomous vehicles being ubiquitous by 2025 (they aren’t), AI automating away routine office work (it hasn’t) and the potential for robo-Docs to largely replace human physicians (they haven’t).
Experts like Ray Kurzweil thought we’d be further along at this point in many domains. While there was an overestimation of specific, physical use cases (e.g., robotics), the social impact of chatbots was largely unexpected.
We’ve consistently overestimated the timeline for physical-world AI tasks while underestimating that of language and creative capabilities. It might be because our minds look for linear progress, whereas what we’ve seen has been sudden, unexpected jumps in new directions.
As such, we should be humble regarding current AI development predictions and open to it unfolding in ways that are unforeseeable.
How has GenAI already reshaped our work? And what does the future hold for GenAI?
Over the past couple of years, GenAI has begun to reshape work in:
Content creation: accelerated production, refinement and rapid iteration
Software development: streamlining routine tasks, debugging, documentation and accelerated learning for new developers
Knowledge work: streamlining information synthesis, communications, and data analysis
Customer service: faster response times and better triaging of issues
And in the near future we might expect changes across:
Workflows: deeper integration and greater capabilities for existing tools (e.g., better recall for Claude) and the development of more specialized tools (e.g., medical LLMs).
Skills: educational evolution (e.g., accelerated upskilling via AI assistance), the emergence of new roles (e.g., prompt engineer) and a greater emphasis on creativity and judgement in the workforce rather than rote memorization and repetitive task completion.
Organizations: adapted division of labor between humans and AI (e.g., evolved management) and flatter hierarchies / fewer middle management layers (e.g., AI for decision support rather than consulting “up the chain of command”).
Over the past couple of years we’ve begun to sense GenAI’s potential, and it has brought a wave of both fear and excitement.
The key to the future lies in adaptation: AI will augment the reality of work rather than erase it. While some believe a wholesale elimination of work is on the horizon, I don’t. Apocalyptic predictions about the “end of work” are overblown; the more likely reality is that we’ll adapt and upskill as our roles and responsibilities are augmented (and enhanced) by GenAI.
What jobs are most vs. least at risk and how should we adapt?
The jobs that are most at risk are centered around content creation or editing, routine analysis, rote research and documentation, basic software development, and otherwise repetitive admin tasks and data entry. Imagine an entry-level business analyst or software developer.
The jobs that are least at risk are centered around deep human knowledge and emotional intelligence (EI), physical manipulation / real-world adaptation, nuanced or creative judgement, and integration of knowledge across multiple domains. Imagine a high-level product owner, physical therapist or carpenter.
With these characteristics in mind, we can adapt by focusing on:
Distinctly human capabilities: EI and relationship building, problem solving across domains, creative vision and novel idea generation.
Working effectively with AI: building prompt engineering skills, understanding AI’s strengths and limitations, learning to validate outputs while using AI to augment existing workflows.
The keys to the future may just lie in building the rare and ever-valuable combination of technical and interpersonal skills: building knowledge across domains while emphasizing hard-to-automate skills in combination.
Where can people go to learn more about GenAI?
People can check out free online courses like DeepLearning.AI’s ChatGPT Prompt Engineering for Developers or Microsoft’s Introduction to Artificial Intelligence for Trainers, and/or OpenAI’s and Anthropic’s technical documentation.
Author’s note: for more details on this or any of the above sections, prompt ChatGPT or Claude (for free) with the question at the top of the section of interest (e.g., “What is GenAI?”).
In this age of transformation, curiosity isn’t just a spice of life — it’s our compass for navigating the future. — The Evolution of BI Reporting
GenAI represents both challenge and opportunity. While it will reshape many aspects of work, the key lies not in resistance but in adaptation. The future belongs to those who can combine human creativity, judgment, and emotional intelligence with AI’s computational power and pattern recognition capabilities.
Success will come from developing skills that complement rather than compete with AI, and maintaining a curious, growth-centric mindset in this rapidly evolving landscape. After all, we can see and do things that GenAI cannot, and vice versa.
The goal isn’t to outrace AI, but to run alongside it — leveraging its capabilities while leaning on our uniquely human ones.