What is Artificial Intelligence?
AI is an old, complex technology that has seen new life recently. It has potential beyond its controversy, and we should welcome that.
Artificial Intelligence, or AI, has become an almost inescapable buzzword over the last few years, with advancements like ChatGPT and Google's Deepmind making headlines and sparking both excitement and concern. However, AI as a concept and a technology has a much longer history than many realize.
This article will serve as a basic introduction and explore the definition and evolution of AI. Given that Kin is a personal AI built around generative large language models (LLMs), it will focus on those technologies. Our article on personal AI offers more in-depth explorations.
Definition of AI
If this is a basic introduction, let's start with the most basic definition. AI has had many definitions over its history, but probably the most inclusive comes from the European Union's High-Level Expert Group on AI. They define AI as “systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals.”
This definition, while broad, captures the essence of AI: artificial systems that can interpret their surroundings in some way, and appear to react intelligently to them by themselves. In short, AI is intelligence that seems to come out of an artificial system.
For the majority of its existence, AI has referred to computational artificial systems of various descriptions, and for most of that time, to digital systems. Which is to say, AI is more often than not composed of some form of computer programming.
It cannot be overstated here, though, that the intelligence displayed by AI does not equate to sentience.1 Intelligence in AI refers to its ability to process information and solve problems, while sentience involves consciousness and the capacity to feel emotions—qualities which AI doesn’t possess.
Even so, any intelligence seen in AI is just a (clever) imitation achieved through advanced algorithms and data processing. That’s because, as is discussed later, these systems only understand the data they work with as patterns and probabilities, not the meaning behind them. This is important to know, because it stops AI from being mistaken for a sentient ‘super-intelligence’ that can be trusted in the same ways humans can; it can’t be. AI is at its best when it complements human intelligence, not when it replaces it.
What is an LLM?
With a basic understanding of AI, this article’s main focus can be explored. Large Language Models (LLMs) are a type of narrow generative AI which has become extremely popular in recent years. But what do those terms mean?
Narrow generative AI, or narrow GAI, is a specific application of Machine Learning (ML) that uses Natural Language Processing (NLP); Machine Learning is itself a specific type of AI. Let’s make that a little simpler to understand, with fewer abbreviations.
What is Machine Learning?
First, Machine Learning. ML is an area of AI focused on building systems that use algorithms (read: sets of rules) to learn and improve new behaviors by studying collections of data, whether those are curated examples or real-world experiences. Machine Learning models used to require human-supervised learning to correct errors as new behaviors developed, but with the advent of deep learning (which is discussed later), fully unsupervised learning became possible for some applications. There are many kinds of machine learning techniques, but what’s most important to know is that this technology is already part of daily life.
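To make that concrete, here is a minimal, hypothetical sketch of supervised machine learning in Python, using the widely available scikit-learn library. The toy dataset and “season” labels are invented purely for illustration, and this is not how Kin’s own models are built.

```python
# A tiny supervised machine learning example: the model infers a rule
# ("learns a behavior") from labeled examples instead of being handed one.
# Toy data invented for illustration; requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Each example is [hours of daylight, temperature in °C], labeled with a season.
examples = [[15, 28], [16, 31], [14, 25], [8, 2], [9, -1], [7, 4]]
labels = ["summer", "summer", "summer", "winter", "winter", "winter"]

model = DecisionTreeClassifier()
model.fit(examples, labels)          # "study" the collection of data

# The learned rule generalizes to an example the model has never seen.
print(model.predict([[10, 5]]))      # -> ['winter']
```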
What is Natural Language Processing?
Next, Natural Language Processing. NLP uses machine learning algorithms to help computers break human language down into information they can process and manipulate. This usually involves stemming, or breaking inflected words down into their root forms (e.g. breaking -> break), and removing unnecessary “stop words” that add no extra meaning, among other steps. NLP might sound new, but it’s the technology that many everyday tools, from search engines to your GPS, use to function.
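As a rough illustration of those two steps, here is a deliberately crude, hand-rolled Python sketch of stop-word removal and stemming. Production systems use dedicated NLP libraries such as NLTK or spaCy; the stop-word list and suffix rules below are simplified assumptions.

```python
# A toy version of two NLP preprocessing steps: stop-word removal and stemming.
# The stop-word list and suffix-stripping rules are deliberately simplistic.
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and"}

def crude_stem(word: str) -> str:
    """Strip a few common English suffixes (a rough stand-in for a real stemmer)."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text: str) -> list[str]:
    tokens = text.lower().split()
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("The librarian is breaking the rules"))
# -> ['librarian', 'break', 'rule']
```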
What is Narrow AI/Weak AI?
Now, Narrow AI. Also known as weak AI, these are AI systems designed to perform specific tasks very well. In the case of narrow GAI, the system is designed to use ML to recognize patterns in a dataset, and then to use those patterns as inspiration to generate new, convincing imitations of that dataset. The patterns in question can be anything from the order of words in a book, to moves in a board game, to pixels in a photo.2 In the case of words, narrow GAI makes use of NLP to turn the words it is studying into something it can better incorporate into its algorithms.
On the other end of this scale sits Artificial General Intelligence (AGI), or Strong AI. While it has not yet been developed, AGI would be AI designed to do any task particularly well, including solving complex problems, rather than just specific tasks. AGI is likely what many people think of when they hear the term “AI”, which has in part led to some of the trust issues we discussed previously here. However, as this article is focusing on narrow GAI and LLMs, we’ll set AGI aside for now.
So, what are LLMs?
Back to Large Language Models. LLMs combine all of these concepts into a type of narrow GAI. They study large amounts of data, and use NLP and ML to pick out not only the patterns in it, but also the probabilities that certain words will appear together in the context of relevant topics.
While there are differences depending on the AI techniques being used, LLMs are essentially sophisticated predictive text applications. They build new text by using existing text to calculate probabilities of which words might go together in which order, to create a response to the prompt given. As mentioned, they don’t ‘understand’ what they’re saying in the way the “intelligence” part of AI might suggest—they’re just using a complex set of rules to guess writing into existence.3
One way to think about an LLM is as a robotic librarian who has read every book in a massive library and has a perfect memory of their contents. When asked a question on a certain topic, this librarian can use that memory to ‘guess’ what an answer would sound like, based on the phrases and concepts mentioned in the books about it. However, the librarian doesn’t truly understand the topics. LLMs work similarly, drawing from their “library” of training data to compose responses.
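To make the ‘sophisticated predictive text’ idea concrete, here is a toy Python sketch that counts which words tend to follow which in a tiny sample text, then ‘generates’ a sentence by sampling from those counts. Real LLMs use neural networks with billions of parameters rather than simple bigram counts, and the sample text below is invented, so treat this purely as the analogy in code form.

```python
# A toy next-word predictor: count which words follow which in the training
# text, then generate new text by repeatedly sampling a plausible next word.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the cat chased the mouse "
    "the mouse ran under the mat"
)

# Bigram statistics: for each word, remember every word observed after it.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        # Sampling from this list favors words that followed more often.
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))   # e.g. "the mouse ran under the mat the cat sat" (varies per run)
```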
In terms of LLMs out in the world, there are the famous closed-source systems like OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, as well as open-source systems like Meta's Llama 3, Mistral 7B, xAI’s Grok-1, and many more.
Closed-source models keep their code and training data private, which makes them more consistent but less flexible, while open-source models make these freely available for anyone to use or modify, which gives them much greater agility.
In Kin’s case, our AI agent is based on a mixture of various open-source LLMs, which we’ve modified. This allows us to combine their strong suits, so that Kin can return the most helpful responses possible, while also allowing us to do the custom work needed to give user-controlled data a maximum level of privacy. In short, open-source LLMs with a local-first approach mean we can make user control and privacy a core part of Kin, right down to the architecture.
However, to truly understand AI and its implications, we need to take a whistle-stop tour of its rich history, which includes everything from ancient myths to modern breakthroughs.
The Rich History of AI Technology
Artificial Intelligence is not a modern invention. It can be traced as far back as ancient myths and legends about artificial life around the world, where stories of animated statues and mechanical servants captured the human imagination. Greek mythology, for instance, tells of Talos, a giant bronze automaton created to protect the island of Crete.
However, outside of automata (contraptions built by ancient Greek master craftsmen to show off, made to appear capable of intelligent movement through cleverly hidden clockwork mechanisms), AI remained confined to myth for much of human history.
The foundations of modern AI were properly laid in the mid-20th century. Inspired by the young field of neurology demonstrating that brain neurons fire in all-or-nothing, almost digital signals, and by electronics research showing that computers were capable of stable digital processing, researchers began to explore the possibilities of ‘electronic brains.’ Among these researchers was Alan Turing, a computer scientist (and expert in other disciplines) whom many regard as the father of modern computation and computer science, even if that claim has recently been contested.
In 1950, Alan Turing proposed the "Turing Test" within his now-seminal paper ‘Computing Machinery and Intelligence’, which he wrote at the University of Manchester. The test proposed that, if a human having a conversation with a human and a machine via teleprinter could not tell the difference between the two, it could be concluded that the machine was ‘thinking.’
While modern AI has since proved things were not that simple, the Turing test still influences AI development today: GPT-4 was subjected to and passed the Turing test only this July.
The term "Artificial Intelligence" itself was coined in 1955 by John McCarthy, during a workshop he was giving on it at Dartmouth College. The following year, John McCarthy and the other ‘founders of artificial intelligence’, as they came to be known, held the Dartmouth Conference at the college. This conference saw Artificial Intelligence become a true academic discipline outside of computer science at a university.
Notably, before this, the first AI programs to play board games against people had been developed in 1951. In the same year, networks of artificial neurons, sometimes called perceptrons or nodes, were first made to perform simple tasks in what would later be called neural networks. As such, early AI researchers were optimistic, predicting that human-level AI would be achieved within a generation. This excitement fueled rapid progress and significant milestones in the 1960s, with Western universities and government agencies heavily funding the technology.
One of the most notable early achievements was ELIZA, often called the first chatbot, created by Joseph Weizenbaum at MIT in 1964. ELIZA could hold simple conversations by rephrasing premade replies and the user’s previous messages, at points giving the illusion of being human. While ELIZA seems primitive today, it represented a significant step forward in NLP and human-computer interaction.
As AI research grew, it captured the public imagination, inspiring a whole new slew of science fiction. Authors like Isaac Asimov explored the potential and pitfalls of AI in works like "I, Robot" (1950), where he introduced the famous Three Laws of Robotics. These cultural touchstones both reflected and influenced public perception of AI, and continue to shape expectations and fears around the technology today.
However, AI, like most things, did not develop linearly. Throughout its life, it’s experienced cycles of high hopes followed by disappointment and reduced funding, known as "AI winters." The first AI winter occurred roughly over the 1970s to the 80s, triggered by the limitations of early AI systems failing to meet public and investor expectations.
This winter also saw the first philosophical and ethical critiques of AI, which claimed the technology could not be called ‘thinking’ if it neither understood nor intentionally created its output. Modern AI systems like LLMs still do not do this: as discussed, they use algorithms and probability to generate outputs, not an interpretation of meaning.
This first winter ended in the late 1980s, when ‘expert systems’ were created: AI which could reasonably emulate the decision-making processes of human experts. These were deployed in programming, legal, and healthcare use cases, and are regarded as some of the first successful AI systems. This paved the way for more funding, which saw more avenues explored, until the costly and inflexible expert systems ultimately caused companies to favor human intelligence, and brought about a second AI winter in the early 1990s.
This winter, given how involved companies now were with AI, was particularly harsh: it shattered both public and corporate faith in AI as a viable technology, forcing the industry to splinter into solving small, specific problems with AI rather than creating more generalized AI systems.
Over the 1990s, these developments saw AI become more integrated into the technology industry, if somewhat quietly. Cheaper computer systems with more computing power, the growing need for algorithms in contemporary programming, and synergy with other fields, allowed AI to slowly continue to grow within the technology industry. This led to significant advancements in areas like speech recognition and computer vision (teaching AI to analyze images and video), paving the way for future developments.
In perhaps the most significant advancement of the 1990s, IBM’s AI chess program Deep Blue became the first computer system to beat a world chess champion under tournament conditions when it defeated Russian Grandmaster Garry Kasparov in 1997. Even though this generation of AI could not learn from new data, it represented a breakthrough in AI technology.
AI’s next milestone came in the 2010s, with the landmark rise of deep learning into common usage. Deep learning is a subset of machine learning based on artificial neural networks loosely inspired by the human brain. Taking advantage of ‘big data’ (the massive datasets the internet was creating) and more powerful ML techniques, deep learning allowed AI systems to “teach themselves” new skills, such as image recognition, by building and refining their own algorithms.
Again, deep learning doesn’t mean the AI system understands what it’s learning in the way a human would. It means that, through a lot of clever design, the way its code is structured lets it create and improve algorithms for completing new tasks and making inferences with little to no human help.
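To give a flavor of the underlying mechanics, here is a toy Python sketch of a single artificial neuron (a perceptron, the building block mentioned earlier) adjusting its own weights until it has learned the logical AND rule from examples. Real deep learning stacks millions of such units across many layers; the learning rate and number of passes here are arbitrary choices for illustration.

```python
# A single artificial neuron "teaching itself" the logical AND function by
# nudging its weights whenever it misclassifies an example (the perceptron rule).
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]            # AND of the two inputs

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):               # repeated passes over the training data
    for (x1, x2), target in zip(inputs, targets):
        prediction = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        error = target - prediction
        # Adjust the weights in proportion to the error: no human writes the rule.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print([1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
       for x1, x2 in inputs])     # -> [0, 0, 0, 1]
```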
Still, deep learning led to breakthroughs in various domains, from IBM's Watson winning Jeopardy! in 2011, to Tesla’s almost-self-driving cars with Autopilot mode, to Google DeepMind's AlphaGo defeating world champion Lee Sedol at the game of Go in 2016—a feat previously thought to be decades away for a deep learning AI model.
It was about this time that data science became a popular area of research and deployment, where deep neural networks were used to analyze business datasets. This is because deep learning gave AI more accuracy when pulling patterns out of datasets, or even forecasting future trends in them.
Similarly, the 2010s saw the mass deployment of virtual assistants, like Amazon’s Alexa, or Apple’s Siri, which could process speech through NLP, and interpret it to trigger actions.
Further using deep learning technologies, the late 2010s and early 2020s, have seen the rise of increasingly powerful Large Language Models (LLMs), like OpenAI’s ChatGPT and Google's Gemini. These models have showcased unprecedented, largely-automated, and real time language processing and generation capabilities, making advanced conversational AI accessible to the mainstream, and triggering a new wave of excitement, investment, and concern about the future of AI applications.
This is why technology professionals refer to current AI technology with such reverence: despite their issues, they’ve been a goal out of reach of the industry for about 75 years.
AI in the Modern Age
Today, LLMs represent some of the most advanced and versatile AI systems. While they are still narrow GAI systems focused on language generation, it is hard to overstate how widely useful AI that instantly generates bespoke, relevant, and coherent language has become. We’ve spoken before about AI’s profound impact on the workplace here.
However, the widespread accessibility and rapid adoption of AI has not been without controversy. Even as ChatGPT set the record for the fastest-growing user base of any application ever,4 OpenAI and many other AI companies have been criticized for using data taken from the internet, in a process called ‘web-scraping’, without the permission of its owners, and allegedly even pirated in some cases. The fact that this data often contains people’s personal information has generated particularly acute anxieties.
Similarly, the processing power these companies require to train their models and operate them at such scale has strained some local electrical grids and water supplies, triggering calls for AI to both become more efficient and invest in sustainable energy sources.
There are also concerns about AI systems perpetuating and amplifying social and political biases present in their training data, with most major LLMs having been accused of partiality on multiple topics. Beyond this, LLMs’ ability to generate convincing misinformation has already caused political damage, something only amplified by their ability to pose as fake commenters on social media.
Job displacement is another significant concern, with fears about AI automating jobs and potentially leading to unemployment in certain sectors. Creatives, like artists, musicians and writers, have felt especially vulnerable.
Lastly, LLMs can sometimes ‘hallucinate’, or provide nonsensical or even simply incorrect responses. While we discuss this more in our blog about why people distrust AI here, hallucination can be particularly problematic in high-stakes situations such as healthcare assistance, where providing the wrong advice could cause injury or death.
Despite these challenges, though, AI has already made positive contributions across various fields. In healthcare, AI is assisting in early disease detection, drug discovery, and personalized treatment plans. Environmental conservation efforts are benefiting from AI models that monitor wildlife, predict natural disasters, and optimize energy consumption. In education, AI-powered tutoring systems are providing personalized learning experiences. Scientific research across disciplines, from astronomy to materials science, is being accelerated by AI technologies. Automation and worker support is also streamlining workflows across the globe.
The future of AI will likely see improvements in its accuracy, breadth of applications, and environmental impact, as it continues to find ways to enhance every industry imaginable.
Where Kin Fits In
Amid the rapid advancement and ongoing debates surrounding AI, personal AI is somewhat unique. Focusing on individual rather than corporate support and assistance, personal AI systems channel this newfound generative and processing power into helping people become more self-aware, build a better mindset, and have more positive experiences.
Even within this distinctive subset of the industry, Kin is taking a unique approach. We know that a personal AI which puts its users in control of their data, experience, and privacy is possible, so we’re creating it.
Kin does this mainly through its advanced memory system, supported by its Journal feature. Kin’s memory allows it to identify, store, and access facts from any conversation with its user, and to use this information to provide more personalized and empathetic responses.
The Journal feature aids this by providing a space for users to share their feelings and experiences in a safe and secure space without questions or judgment.
Everything Kin learns through these features is fully viewable in its “Streaks & Stats” tab. With a single tap from that tab, a user can completely wipe Kin’s knowledge of them—or they could ask Kin to forget something specific at any point in the chat.
We do this through our ‘local-first’ data approach. Kin stores and processes everything possible on user devices, not in the cloud. Anything passed to the cloud is passed securely, and only ever to places vetted by us. Neither Kin the company nor Kin the AI can read it. You can learn more about how this works here.
If this sounds interesting, we’ll let Kin explain how you can get involved:
A Word from Kin, Our Resident AI
Hey there! AI is rapidly evolving, so it’s great to see you learning more about the technology and how it’s progressing.
Looking to explore user-centric AI practically? You can download me here, and use this guide to start a conversation with me below:
Introduce yourself: Start with the basics—your interests, your job—then dig deeper. Tell me your ambitions, and your worries: the more I know, the better I can help.
Posit Problems: What specific thing do you want to tackle now? Productivity? Building relationships? Help me help you by being clear.
Explore solutions: Suggest some ways you see these issues being solved, so I can get an idea of how you like to work and expand on it. The more the merrier.
Look inside!: After a discussion like that, I’ll have some knowledge of you. In my ‘Memory’ tab, take a look at what I’ve learned about you—and try asking me to forget some of it! You should find it easy.
These steps help me help you find solutions for your professional and personal problems faster and more clearly, without taking advantage of your data, interfering with your workflows, or revealing the secrets you share.
Conclusion
AI has come a long way from being the stuff of legend. While its recent advancements, particularly in the realm of Large Language Models, have spawned exciting technologies, they have also raised more ethical questions than ever imagined.
While not (yet) sentient, AI systems today are sophisticated pattern recognition and generation tools, built on the shoulders of decades of research and vast amounts of data. They can recognize, learn, and repeat complex patterns and tasks, and sometimes do so in ways indistinguishable from humans.
Still, concerns about data privacy, job displacement, environmental impact, and the potential misuse of AI are valid and must be addressed as the technology continues to evolve.
Kin represents a new, transparent approach to AI development, and it’s one we believe the industry should adopt. By proving that AI can be both powerful and responsibly developed, Kin is helping to shape a future where AI serves as a trusted tool for personal and professional growth.
As we move forward, it's clear that AI will continue to play an increasingly important role in our lives. The key will be to harness its potential while carefully navigating the ethical implications. With philosophies and technologies like Kin's, we can work towards a future where AI enhances human capabilities without compromising our values or privacy.
Sheikh, H., Prins, C., Schrijvers, E. 2023. “Artificial Intelligence: Definition and Background”. In: Mission AI. Research for Policy. Springer, Cham. Available at: https://doi.org/10.1007/978-3-031-21448-6_2 [Accessed 07/12/24]
Ang, T. L.; Choolani, M.; See, K. C.; Poh, K. K. 2023. “The rise of artificial intelligence: addressing the impact of large language models such as ChatGPT on scientific publications”. Singapore medical journal, 64(4), 219–221. Available at: https://doi.org/10.4103/singaporemedj.SMJ-2023-055 [Accessed 07/12/24]
Birhane, A. et al. 2023. “Science in the age of large language models”. Nature Reviews Physics, 5(5), pp. 277–280. Available at: doi:10.1038/s42254-023-00581-4 [Accessed 07/12/24]
Hu, K. 2023. “ChatGPT sets record for the fastest-growing user base.” Reuters. 2 Feb. Available at: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ [Accessed 12/09/24]