*I love AI, but I was in two minds as to whether to publish this post or not. For me it doesn’t quite hit the spot, but as this is an experiment with AI, or more precisely an LLM, it’s probably worthwhile to post it, hits and misses alike.
** AI – it’s really moving forward at some pace! As a species, when was the last time we slowed a real technological advance down just so we could weigh up the risks? It didn’t happen with fire, the Industrial Revolution, the internal combustion engine or antibiotics. Maybe this time we should?
*** This article doesn’t cover AGI or ASI; the Llama model I used does have knowledge of these topics as well.
Computing and the development of Artificial Intelligence (AI) have become increasingly intertwined, with the potential to revolutionise numerous aspects of our lives. As we navigate this uncharted territory, it’s essential to understand the historical context, core theories, and recent advancements that have brought us to where we are today. The purpose of this article is to provide an in-depth exploration of the topic, while also highlighting the significance and relevance of computing and AI in our modern world.
The concept of AI dates back to ancient Greece, with myths about artificial beings endowed with human-like abilities. However, the modern study of AI began in the 1950s, with the Dartmouth Summer Research Project on Artificial Intelligence, led by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon [1]. This project marked the beginning of AI as a field of research, with the goal of creating machines that could simulate human intelligence. As computing technology advanced, so did AI, with the creation of the first AI program, the Logic Theorist, by Allen Newell and Herbert Simon in 1956 [2].
The 1960s and 1970s saw significant advancements in AI, including the founding of the Artificial Intelligence Center at Stanford Research Institute (SRI) and the creation, in 1969, of Shakey, the first mobile robot able to reason about its own actions [3]. The 1980s witnessed the rise of expert systems, designed to mimic human decision-making in specific domains. However, the field went into decline in the late 1980s and early 1990s, owing to the limitations of rule-based systems and the lack of progress towards true human-like intelligence.
The 21st century has seen a resurgence in AI research, driven by advances in computing power, data storage, and machine learning algorithms. The development of deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has enabled AI systems to learn from large datasets and improve their performance over time [4]. According to Demis Hassabis, co-founder of DeepMind, “the key to achieving true AI is to create systems that can learn and improve themselves, rather than relying on human programming” [5].
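To make the idea of a CNN a little more concrete, here is a minimal sketch in PyTorch. It is my own illustrative example, not code from any system mentioned above: the TinyCNN name, the 28×28 greyscale input size, the ten output classes and the random dummy batch are all assumptions made purely for illustration.

```python
# A minimal convolutional neural network (CNN) sketch, assuming PyTorch.
# The architecture and the dummy data below are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local visual features (edges, textures).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 14x14 -> 7x7
        )
        # A fully connected layer maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x.flatten(1)
        return self.classifier(x)

# One training step on a dummy batch: compare predictions to labels,
# then nudge the weights to reduce the error.
model = TinyCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 28, 28)    # 8 random 28x28 greyscale "images"
labels = torch.randint(0, 10, (8,))   # 8 random class labels
optimiser.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
```

Repeated over a large labelled dataset, that single step is essentially what “learning from large datasets” means in practice.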
One of the core theories in AI is machine learning, which involves training algorithms on data so that they can make predictions or decisions. There are several types of machine learning, including supervised, unsupervised, and reinforcement learning. Supervised learning trains algorithms on labelled data; unsupervised learning trains them on unlabelled data to discover patterns or relationships; and reinforcement learning trains them to make decisions based on rewards or penalties [6].
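As a rough sketch of the difference between supervised and unsupervised learning, the example below uses scikit-learn on a small synthetic dataset; the data, the choice of logistic regression and k-means, and the numbers are assumptions made purely for illustration.

```python
# Supervised vs. unsupervised learning in a few lines, assuming scikit-learn.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic 2-D points drawn from three groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the labels y are provided, and the model learns to predict them.
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted class:", classifier.predict([[0.0, 0.0]]))

# Unsupervised: no labels are used; the algorithm discovers groupings itself.
clustering = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Assigned cluster:", clustering.predict([[0.0, 0.0]]))

# Reinforcement learning (not shown) learns from rewards and penalties
# received while interacting with an environment, not from a fixed dataset.
```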
Recent advancements in AI have led to a range of applications, including virtual assistants such as Siri and Alexa, image recognition systems, and self-driving cars. According to a report by McKinsey, the economic impact of AI could be substantial, with estimates suggesting it could increase global GDP by up to 14% by 2030 [7]. However, the development of AI also raises concerns about job displacement, bias, and accountability. As Andrew Ng, co-founder of Coursera, notes, “the biggest risk of AI is not that it will become superintelligent and take over the world, but that it will exacerbate existing social inequalities” [8].
The development of AI has also been shaped by the availability of large datasets and advances in computing power. Benchmark datasets such as ImageNet and CIFAR-10 have enabled researchers to train and test AI algorithms at scale [9], while specialised hardware, such as graphics processing units (GPUs) and tensor processing units (TPUs), has accelerated the training and deployment of AI models [10].
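As a small illustration of why that hardware matters, the sketch below (again assuming PyTorch; the layer size and batch shape are arbitrary) shows that moving a model and its data onto a CUDA-capable GPU, when one is available, is often only a line or two of extra code.

```python
# Running the same computation on CPU or GPU, assuming PyTorch.
import torch
import torch.nn as nn

# Pick a CUDA GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1000, 10).to(device)        # move the weights to the device
batch = torch.randn(64, 1000, device=device)  # create the inputs on the same device
scores = model(batch)                         # the matrix multiply runs on that device
print("Ran a forward pass on:", device)
```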
As we look to the future, it’s essential to consider the implications of AI for our society and economy. According to a report by the World Economic Forum, by 2022 more than a third of the skills desired for most jobs will consist of skills that are not yet considered crucial to the job today [11]. This highlights the need for education and retraining programmes to prepare workers for an AI-driven economy. As Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, notes, “the future of AI is not about replacing humans, but about augmenting human capabilities and creating new opportunities for growth and development” [12].
In conclusion, the development of AI is a complex and multifaceted topic, with a rich history and significant implications for our future. As we continue to advance the field, it’s essential to weigh the potential risks and benefits, and to insist on responsible AI development and deployment. We must ask ourselves: what does the future hold for AI, and how can we ensure that its development benefits humanity as a whole? The answer will depend on our ability to work together to create a future in which AI enhances human life rather than controls it.
References and Further Reading:
- McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence. Retrieved from https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
- Newell, A., & Simon, H. A. (1956). The Logic Theory Machine: A Complex Information Processing System. The RAND Corporation.
- Nilsson, N. J. (1984). Shakey the Robot. SRI International.
- LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
- Hassabis, D. (2017). The future of artificial intelligence. TED Talks.
- Mitchell, T. M. (1997). Machine learning. McGraw-Hill.
- Manyika, J., Chui, M., Bisson, P., Woetzel, J., & Stolyar, K. (2017). A future that works: Automation, employment, and productivity. McKinsey Global Institute.
- Ng, A. (2017). The biggest risk of AI is not that it will become superintelligent, but that it will exacerbate existing social inequalities. Harvard Business Review.
- Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems.
- Jouppi, N. P., Young, C., Patil, N., & Patterson, D. (2017). In-datacenter performance analysis of a tensor processing unit. Proceedings of the 44th Annual International Symposium on Computer Architecture.
- World Economic Forum. (2018). The future of jobs report 2018. Retrieved from https://www.weforum.org/reports/the-future-of-jobs-report-2018
- Li, F. F. (2018). The future of AI is not about replacing humans, but about augmenting human capabilities. TED Talks.



