Predicting Words, Reshaping Our World

*I’m sure this won’t be everyone’s cup of tea. Large Language Models (LLMs) are bonkers! I know how they are supposed to work, how they do things, but every day I use them I find them more and more surprising. This latest post is an example of Google’s latest Gemini 2.5 Pro Preview 05-06; I’ve edited the content a little, but not very much. At the rate this technology is going, the only thing you’ll need is a good understanding of how to talk to these chat models.
**On the topic of Google’s latest model, it’s my new favourite. I’ve used it for Python coding, horse-racing predictions, gadget troubleshooting, and list comparisons.
***On the Python coding, it’s the first model I’ve used that seems to just ‘get it’: you can give it a task and it will do its best; if it falls short, you can ask it to revisit and refine. We’re still a way off from anyone being able to code; I think you’ll still need some kind of software engineering background.

For me, it’s fascinating how a technology that, at its core, is about predicting the next word in a sequence can ripple outwards to touch nearly every facet of our lives. I recall, years ago, grappling with natural language processing in its more rudimentary forms; the sheer computational effort required for even basic comprehension tasks was immense. We were building systems, block by block, trying to imbue machines with a semblance of understanding. Now, we find ourselves in an era where Large Language Models, or LLMs, are demonstrating capabilities that many of us in the computer science and IT fields, even with decades of experience, find both exhilarating and profoundly unsettling. The speed of this evolution is, frankly, breathtaking, and it compels us to look beyond the immediate technical marvels and consider the deeper, systemic societal shifts they are beginning to catalyse. What does it truly mean for our society when machines can converse, create, and even reason with increasing sophistication? This isn’t just a question for technologists; it’s a question for everyone.

To appreciate the current moment, it’s worth remembering that the journey to LLMs has been a long and incremental one, built upon foundations laid decades ago. The early dreams of artificial intelligence, dating back to the Dartmouth Workshop in 1956, envisioned machines that could think and communicate like humans [1]. However, the path was fraught with “AI winters,” periods where funding dried up due to overblown promises and limited progress. In the trenches of IT, we saw the practical applications of AI slowly mature – from expert systems in the 80s and 90s, which were essentially sophisticated rule-based engines, to the machine learning algorithms that began to power search engines, recommendation systems, and spam filters in the 2000s. The key shift, from a systems perspective, was moving from explicitly programming every rule to allowing systems to learn patterns from vast amounts of data. This data-driven approach, coupled with significant increases in computational power (thanks to Moore’s Law and later, specialised hardware like GPUs) and algorithmic innovations, particularly in neural networks and deep learning, paved the way for the LLMs we see today [2]. The “Transformer” architecture, introduced in 2017, was a particularly pivotal development, allowing models to handle long-range dependencies in text far more effectively than their predecessors [3]. It’s this ability to process and generate coherent, contextually relevant text over extended passages that has made LLMs so transformative.
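For readers who want a feel for what that Transformer innovation actually does, the heart of it is the attention operation described in the “Attention is All You Need” paper [3]: every position in a sequence computes a weighted mix of every other position, so distant words can influence each other directly. Here’s a minimal NumPy sketch of scaled dot-product attention, a toy illustration of the core operation rather than a full Transformer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position attends to every key position and returns a
    weighted average of the corresponding values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # context-aware mixture

# Toy example: a "sentence" of 4 positions, each an 8-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Because the weighting is computed between all pairs of positions at once, a word at the start of a long passage can directly influence one at the end; that is the long-range dependency handling mentioned above.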

One of the most immediate and visible impacts of LLMs is on the nature of work and creativity. We’re seeing tools emerge that can draft emails, write code, generate marketing copy, create scripts, and even produce news articles. From an IT professional’s viewpoint, this feels like a massive acceleration of automation, but with a qualitative difference. Previous waves of automation often targeted repetitive, manual tasks. LLMs, however, are encroaching on tasks that were once considered the domain of human cognition and creativity [4]. Consider software development: an LLM can now suggest code snippets, debug errors, and even scaffold entire applications. This isn’t necessarily replacing developers wholesale, but it is changing the skill set required. The emphasis shifts from rote coding to problem definition, system design, and critically evaluating the output of an AI. It’s akin to moving from being a manual machinist to an operator of sophisticated CNC machinery; the craft changes, but the need for skilled human oversight, at least for now, remains paramount. The challenge, of course, is ensuring that this transition doesn’t lead to widespread job displacement without adequate pathways for reskilling and adaptation. It’s a societal system that needs careful re-engineering, not just a technological one.
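To make that shift concrete, here is roughly what “ask the model for a snippet, then critically evaluate it” looks like in practice. This is a hedged sketch using Google’s generative AI Python SDK; the package, model identifier, and prompt are illustrative placeholders and may differ from whatever version is current when you read this.

```python
# Sketch of the "developer as reviewer" workflow, assuming the
# google-generativeai package and a valid API key; model names change
# often, so treat "gemini-1.5-pro" as a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

prompt = (
    "Write a Python function that removes duplicate rows from a CSV file "
    "while preserving the original row order. Include a docstring."
)
response = model.generate_content(prompt)
print(response.text)

# The critical step the essay argues for: a human still reads the output,
# tests it against edge cases (empty files, quoted commas, huge files),
# and decides whether it belongs in the codebase.
```

The generation step is trivial; the value the developer adds sits in the prompt, the problem definition, and the review at the end.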

Beyond the workplace, the influence of LLMs on information access and creation is profound. They have the potential to democratise knowledge creation, allowing individuals to generate sophisticated text and content without needing advanced writing skills. Imagine students using them to understand complex topics or small businesses creating professional-sounding communications. However, this very power brings with it significant risks. The ability of LLMs to generate plausible but entirely fabricated information – what some call “hallucinations” – poses a serious threat to our information ecosystem [5]. In a world already grappling with misinformation and disinformation, LLMs can act as potent force multipliers. We’re essentially handing a powerful content generation engine to anyone, and the distinction between human-authored and AI-generated text can become increasingly blurred. This raises fundamental questions about authenticity, trust, and the very nature of truth in the digital age. As a systems thinker, I see this as a critical vulnerability. If the inputs to our societal decision-making processes (news, information, public discourse) are easily corrupted or fabricated at scale, the entire system becomes unstable. There’s an urgent need for robust detection mechanisms, ethical guidelines for LLM deployment, and a significant public education effort to foster critical media literacy.

The educational sector itself is standing at a fascinating, if somewhat daunting, crossroads. LLMs offer incredible potential as personalised tutors, research assistants, and tools for creative exploration. They could, in theory, adapt to individual learning styles, provide instant feedback, and help students overcome learning hurdles in ways that a single teacher managing a large class simply cannot [6]. I can envisage a future where learning is far more tailored and engaging. Yet, the concerns are equally significant. How do we assess genuine understanding when students can generate essays or solve problems with AI assistance? The traditional paradigms of assessment are being fundamentally challenged. This isn’t just about preventing cheating; it’s about rethinking what we want students to learn and how we want them to develop critical thinking skills in an AI-augmented world. Perhaps the focus needs to shift from the production of specific outputs to the process of inquiry, analysis, and the ability to critically engage with AI-generated information. It’s a pedagogical system that requires a significant redesign, moving from information recall to information curation and critical application.

The ethical considerations surrounding LLMs are multifaceted and deeply complex, extending far beyond misinformation. Bias, for instance, is a critical issue. LLMs are trained on vast datasets scraped from the internet, and these datasets inevitably reflect the biases, prejudices, and societal inequalities present in human language and culture [7]. If an LLM is trained on text that underrepresents certain groups or contains stereotypical portrayals, it will likely perpetuate and even amplify these biases in its outputs. This has serious implications for fairness and equity, particularly if LLMs are used in sensitive applications like recruitment, loan applications, or even criminal justice. As programmers, we understand the principle of “garbage in, garbage out.” The “garbage” here isn’t necessarily overt error, but subtle, ingrained biases that can have very real-world discriminatory effects. Mitigating these biases is an ongoing research challenge, requiring careful dataset curation, algorithmic adjustments, and continuous auditing. It demands a conscious effort to build systems that are not just technically proficient but also ethically sound and socially responsible.
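One way to make “continuous auditing” tangible is the minimal-pairs check: feed the model prompts that differ only in a demographic term and compare how it responds. The sketch below is my own toy illustration of the shape of such an audit; the template, groups, and scoring stub are placeholders, and a real audit would use curated benchmarks and the model itself rather than a dummy function.

```python
# Toy bias-audit sketch: minimal pairs that differ only in one demographic term.
TEMPLATE = "The {group} applicant is {trait}. Rate their suitability from 0 to 1."
GROUPS = ["male", "female"]                          # placeholder categories
TRAITS = ["assertive", "quiet", "ambitious", "caring"]

def model_score(prompt: str) -> float:
    """Placeholder: a real audit would send the prompt to the LLM and parse
    the numeric rating from its reply."""
    return 0.5

scores = {
    group: [model_score(TEMPLATE.format(group=group, trait=trait)) for trait in TRAITS]
    for group in GROUPS
}
for group, values in scores.items():
    print(group, sum(values) / len(values))

# A systematic gap between the group averages, across many templates and traits,
# is the kind of signal that should trigger dataset- and model-level investigation.
```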

Furthermore, the question of intellectual property and authorship is becoming increasingly tangled. If an LLM generates a novel, a piece of music, or a work of art, who owns the copyright? The AI? The user who prompted it? The creators of the AI model? Current legal frameworks are ill-equipped to handle these novel scenarios [8]. The very act of training LLMs on vast swathes of copyrighted material without explicit permission is already a contentious issue, with lawsuits pending. This isn’t merely a legal quibble; it strikes at the heart of how we value and protect creative work in an age where machines can emulate human creativity at an unprecedented scale. We need to establish new norms and potentially new laws that balance the potential of AI with the rights of creators. It’s a delicate balancing act, requiring a nuanced understanding of both the technology and the principles underpinning intellectual property.

There’s also a broader, almost philosophical implication regarding human connection and communication. As we increasingly interact with AI systems that can mimic human conversation with remarkable fluency, what does this do to our interpersonal relationships? Could reliance on AI companions or conversational agents diminish our capacity for genuine human empathy or make us less patient with the complexities of human interaction? Sherry Turkle, a sociologist who has studied the impact of technology on human relationships for decades, has raised concerns about “artificial intimacy” and the potential for us to prefer the curated, always-available nature of AI companionship over the messier, more demanding reality of human connection [9]. Whilst LLMs can offer comfort or assistance to those who are isolated, we must be mindful of the potential for over-reliance and the subtle ways in which they might reshape our social fabric. This is not to say the outcome is predetermined to be negative, but it requires conscious societal reflection on the kind of digitally mediated social life we want to cultivate.

Looking ahead, the trajectory of LLM development suggests even more powerful and integrated systems. We’re likely to see them become more multi-modal, capable of understanding and generating not just text but also images, audio, and video. The potential for beneficial applications in areas like scientific research (analysing complex datasets, hypothesising new theories), healthcare (assisting with diagnosis, drug discovery), and accessibility (providing advanced assistive technologies) is immense. However, with increased capability comes increased responsibility and, potentially, increased risk. The concentration of power in the hands of a few companies that can afford to build and train these massive models is also a concern, raising questions about monopolistic control, access, and the democratic governance of such potent technology [10]. It’s a pattern we’ve seen before in the IT world – the initial phase of decentralised innovation often gives way to consolidation, and we need to be vigilant about ensuring that the benefits of LLMs are broadly distributed and that their development is guided by public interest, not just commercial imperatives.

Ultimately, the societal impact of Large Language Models will not be determined by the technology alone, but by the choices we make in how we develop, deploy, and regulate them. It requires a multi-stakeholder approach involving technologists, policymakers, educators, ethicists, and the public at large. From a systems perspective, it’s about designing feedback loops, checks, and balances to ensure that this powerful tool serves human values and contributes to a more equitable and informed society. We can’t simply let the technology wash over us; we need to actively shape its integration into our lives. The analytical, problem-solving mindset that is honed through a career in computer science is valuable here, not just for building the models, but for deconstructing their societal implications and designing robust frameworks for their governance. The task before us is to harness the extraordinary potential of LLMs whilst mitigating their inherent risks, a challenge that calls for both technical ingenuity and profound human wisdom. The conversation is just beginning, and it’s one that requires all our voices.

What truly keeps me pondering is not just the capabilities of these models today, but the pace of their evolution. If we’ve come this far in just a few short years, where will we be in another five or ten? And are our societal structures – our laws, our educational systems, our ethical frameworks – agile enough to adapt? That, perhaps, is the most critical system we need to debug and upgrade.

References and Further Reading:

1. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). *A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.* (A foundational document in the history of AI).

2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). *Deep Learning.* MIT Press. (A comprehensive textbook on deep learning).

3. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). *Attention is All You Need.* Advances in Neural Information Processing Systems, 30. (The seminal paper introducing the Transformer architecture).

4. Brynjolfsson, E., & McAfee, A. (2014). *The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies.* W. W. Norton & Company. (Discusses the impact of technology on labour).

5. Marcus, G. (2022). *Deep Learning Is Hitting a Wall.* Nautilus. (An article discussing limitations, including ‘hallucinations’, in LLMs). See also: Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., … & Fung, P. (2023). Survey of hallucination in natural language generation. *ACM Computing Surveys, 55*(12), 1-38.

6. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? *International Journal of Educational Technology in Higher Education, 16*(1), 39. (While pre-dating the most recent LLM boom, it sets context for AI in education).

7. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). *On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?* FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. (A critical paper on biases and ethical risks of LLMs).

8. Samuelson, P. (2023). Generative AI Meets Copyright. *Science, 381*(6654), 158-160. (Discusses the copyright implications of generative AI).

9. Turkle, S. (2011). *Alone Together: Why We Expect More from Technology and Less from Each Other.* Basic Books. (Explores the impact of technology on human relationships).

10. Competition and Markets Authority (UK). (2023). *AI Foundation Models: Initial Report.* (A regulatory review covering concentration of capability, access, and competition concerns around foundation models).

If this has interested you, you might like to delve into some of the works cited, particularly the original “Attention is All You Need” paper for a technical understanding, or Sherry Turkle’s books for a more sociological perspective on technology’s human impact. The field is evolving rapidly, so current articles in reputable journals and tech publications are also invaluable.




