This series’ first article, “What is Trust, and Why AI needs it,” proposed a working definition of trust for the Artificial Intelligence industry, and hinted that the next article would discuss why there is a mounting lack of trust in AI. This is that next article.
Predictably, the reasons why people don’t trust AI are another complex topic to explore. As the last article touched on, there’s been a long history of negative representation in the media, and a growing amount of scandal, surrounding the technology. But, that’s not the whole story.
Before that discussion, though, some more context is needed. What does “a mounting lack of trust” actually mean?
How Much Distrust Is There in Artificial Intelligence?
There is a considerable amount of general distrust in AI, and according to the research, it’s not held by a small minority. KPMG’s 2023 study on the global perception of AI reveals that 61% of people surveyed were wary of trusting AI, with that wariness concentrated in more economically developed countries (MEDCs), including among the majority of the Americans in the poll.1 The main reasons cited were concerns about cybersecurity (shared by 86% of those who distrust AI), AI’s potential to influence and sabotage HR and business decisions, and the overall need for regulation.
Given the infancy of the technology,2 and its representation in the media (which will be discussed), these are understandable and valid concerns.
To better delve into exactly why this mistrust is forming, we can use the nine factors of trust in AI that the National Institute of Standards and Technology (NIST) defined in its study of trust in AI, which the previous article covered. These factors are:
Accuracy: How close an AI’s generation is to the expected output.
Reliability: The consistency of an AI system’s accuracy across different conditions and over time.
Resiliency: The ability of an AI system to keep performing well despite facing errors, faults, or challenges in its operating environment.
Objectivity: The extent to which an AI system's decisions or outputs are unbiased and based solely on factual data or evidence.
Security: The measures and mechanisms in place to protect an AI system from unauthorized access, attacks, or data breaches.
Explainability: The capability of an AI system to provide understandable explanations for its decisions or actions to human users.
Safety: The assurance that an AI system will not cause harm to users or the environment under normal or foreseeable conditions.
Accountability: Whether the developers, operators, or owners of an AI system are responsible for its outputs, decisions, and impacts on users and society.
Privacy: The protection of personal or sensitive information that an AI system processes, ensuring that data is used ethically and lawfully.
While, as the previous article showed, the importance of each factor varies depending on the task the AI is completing, all are important to facilitating the growth of task-based trust. Still, the KPMG study implies that public confidence in many, if not all, of these factors is low.
However, these factors are not exhaustive. As our last article’s working definition of trust suggests, trust also requires a relationship-based element. So, based on that research, we can add a tenth factor:
Relatability: How easy it is for someone to feel as though they are building a relationship with the AI system.
A relationship in this sense does not refer to a romantic relationship, or even a friendship, but rather a feeling of likability, familiarity and comfort which users are able to build. Without this ability, it is demonstrably harder to build relationship-based trust—a sub-type of trust discussed in the previous article.
With these factors in mind, common reasons for distrust in AI can be analysed. But, before that, let’s cover the media’s lasting role in it.
AI: Humanity’s (Fictional) Enemy
Ever since science fiction’s arguable beginning3 with Mary Shelley’s Frankenstein,4 the genre has represented artificial intelligence as somehow dangerous to everything on Earth,5 whether through human mistreatment of it,6 or through its own inhuman (and often malevolent) nature.7 Usually, these works stress that AI requires strict rules for both creator and created,8 if it should be created at all.
While often written to warn against the thoughtless overreach of science during periods of rapid technological development,9 these stories of Skynet,10 HAL,11 Ultron,12 and the like have frequently become cultural icons, and as such have left a lasting stereotype of AI as something unpredictably dangerous.13
This has been changing over the last 30 years or so, with popular stories focusing more on how AI isn’t created evil, but can be mismanaged into becoming it,14 or even unfairly persecuted on that assumption.15 However, the original stereotype endures, and a negative view of AI remains part of the modern zeitgeist: just look at the Netflix series Black Mirror.16
Research on this lasting impression shows that people are likely to unconsciously equate AI with danger,17 and are thus more likely to consume media that reinforces that narrative.18 The result is a feedback loop, where the media reinforces the narrative, and the narrative reinforces the media. This is why current coverage trends toward AI’s “world-changing” capabilities and extreme failures19: it’s what people expect to see, so they engage with it, because studies show people prefer being proven right to being proven wrong.
For a lot of people, this will be their only non-fictional window into AI, and they’ll only be shown the scariest and worst of a technology they don’t understand and have been conditioned to fear.
Now, we aren’t implying that the AI industry needs to invest in creating pro-AI media. Instead, we’re stressing that long-standing cultural attitudes toward AI contribute to people’s distrust of AI—meaning they’ll need to be kept in mind when this distrust is tackled.
Common Objections to AI
Using a mixture of research202122 and our own experience, we’ve boiled down some of the most common reasons cited for distrust in AI and AI-powered tools. They are below, along with some suggested causes:
Objection 1: AI is Alien
Perhaps the most common objection is that AI is seen as an ‘alien’ presence. Whether that’s because it’s too ‘new,’ ‘complex,’ or ‘inhuman’ to be understood, it’s often claimed that AI is too far from humanity’s perception of the world to be so involved with us.232425
To a point, this objection is understandable. The increasingly pseudo-conversational nature of AI can trigger the ‘uncanny valley’ effect, where almost-human things evoke feelings of fearful uneasiness,26 as AI fails to build relationships with people that feel ‘normal’.
The best example of this is AI’s relatability: ask a Large Language Model chatbot to be your friend, or what it thinks of you, and it will likely mechanically reply with something resembling “I’m sorry, but as an AI, I cannot…” (more on that later). That’s not a normal, ‘human’ thing to say.
In terms of the above factors, this raises doubts around generative AI’s safety, explainability, and accountability. If it doesn’t seem human, it’s more obvious to people that they don’t understand it. If people don’t understand it, how can it be safe? Who is responsible when it malfunctions? People don’t know where to find the answers to these questions easily.27
Compound this with AI’s common lack of a long-term memory, as well as the media’s historical portrayal of AI, and it becomes easy to see why AI is treated with such caution.
Kin is tackling this by boosting relatability, without emulating humanity. Kin’s default empathetic tone and advanced memory allow it to approach discussions in a more conversational way, and not only remember but incorporate past facts and chats into present responses and suggestions. These all make Kin feel more human, without it claiming to be. Kin is an AI, and admits it—but it is a personal AI.
Objection 2: AI is Not Always Correct
Another major concern is the precision of AI.28 In terms of our factors, that’s accuracy, reliability, resiliency, and objectivity.
That’s also a valid concern. Current AI technology does make mistakes,29 and they can be elusive, especially in topics outside of the user’s expertise. They’re not always obvious in origin, either: for example, biases in training data can transfer into an AI model and influence its responses.30 On paper, this can make the prospect of automation a risky one for companies and employees alike, especially in use cases like healthcare or self-driving cars.
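To make that bias-transfer point concrete, here is a minimal, purely hypothetical sketch (the data and names are invented for illustration, not drawn from any real system): a naive “model” that scores candidates by their group’s historical hire rate will faithfully reproduce whatever skew its training data contains.

# Toy illustration (hypothetical data): a skew in the training data
# carries straight through to the model's outputs.
training_data = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

def score_candidate(group: str) -> float:
    """Naive 'model': score a new candidate by their group's past hire rate."""
    past = [row for row in training_data if row["group"] == group]
    return sum(row["hired"] for row in past) / len(past)

print(score_candidate("A"))  # 0.75 -- the historical skew is reproduced
print(score_candidate("B"))  # 0.25 -- without anyone intending it

Real models are far more complex, but the principle is the same: if the data leans one way, so will the outputs.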
But, that’s not to say modern AI is frequently wrong: it’s just possible. Perfect accuracy is always being pursued, and accuracy is improving,31 even if infallibility may be impossible.32 Neither people nor AI should be held to an idealized standard: it’s a fact of human-made technology, just as it’s a fact of humanity, that mistakes can be made. That’s especially true when an AI’s knowledge often comes from studying humans, who make more than their fair share of errors. If we can’t get things right every time, can AI be expected to?
Kin is tackling that issue by showing that there are powerful uses of AI tools outside of generating a ‘perfect output.’ While it is capable of searching the web, finding and understanding reputable sources, and relaying the facts within, that isn’t its full potential—even if it is useful.
The true power of Kin is its ability to empathetically and rapidly generate personalized support and suggestions, which can help its users find solutions and get unstuck. Whether it’s negotiating a salary or starting a new job, Kin is designed to make the best (and most personalized) suggestions it can, based on the user’s data. But they’re just suggestions, aimed at helping users come to their own (hopefully improved) decisions.
Kin isn’t meant to be the final decision maker for anyone, because that isn’t its place. AI is, and should be, a tool to augment human decision-making and understanding, not replace it.
Objection 3: AI Steals Sensitive Data
Many companies, both AI and otherwise,33 have been harvesting people's content and data without permission. Sometimes, even the AI systems themselves are doing this too.34
While AI and its algorithms do not need stolen data to function, the fact that some models and companies steal personal data has, understandably, become one of the biggest fears surrounding AI.35 Sadly, it’s a real and damaging breach of the security, safety, accountability, and privacy factors. People are left feeling vulnerable, and unable to trust AI tools or their providers, when those providers feed this stolen content into their AI training.
It’s important to remember that users own their personal data, and can give it up legally and willingly. In fact, freely given user data is often essential to a service: how can you get something delivered to you without providing your address? Or unlock your phone with facial recognition?
But, even legal data usage can be immoral, given the way many companies bury their requests for consent within terms and conditions. For example, it was only recently that Meta infamously updated their terms of service so that, by default, all Instagram content was available for their AI training and data science purposes.36
At Kin, we want to make it very clear to our users what data they are choosing to provide us with, and when, and what we’re using it for. Features like Kin’s memory tab are the beginning of this.
We also have privacy policies such as our ‘local-first’ rule, where Kin keeps as much of your data, and its local machine learning, on your phone as possible. This means that whatever is personal to you stays in your possession. Our Privacy in Personal AI blog series explores this in more depth.
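As a purely illustrative sketch of what ‘local-first’ means in practice (the file names, fields, and functions below are hypothetical examples, not Kin’s actual code), the idea is that the full record is written only to on-device storage, and anything sent off the device is limited to an explicit, non-personal whitelist:

# Hypothetical local-first pattern (illustrative only, not Kin's implementation):
# personal data stays in on-device storage; only whitelisted fields ever leave it.
import json
from pathlib import Path

LOCAL_STORE = Path("on_device_store.json")  # stands in for the phone's local storage
SHAREABLE_FIELDS = {"app_version"}          # nothing personal is on this list

def save_locally(record: dict) -> None:
    """Persist the full record on the device only."""
    LOCAL_STORE.write_text(json.dumps(record))

def payload_for_server(record: dict) -> dict:
    """Return only the explicitly shareable, non-personal fields."""
    return {key: value for key, value in record.items() if key in SHAREABLE_FIELDS}

record = {"journal_entry": "Practised my salary negotiation", "app_version": "1.2.3"}
save_locally(record)
print(payload_for_server(record))  # {'app_version': '1.2.3'}

The design choice this illustrates is simple: privacy is enforced by what the code is allowed to send, not just by a promise in a policy document.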
In short, we want you to know what data of yours we have, how we’re using it, and how you can delete it. We’re monetizing our service, not your data.
Objection 4: AI is Limited
This ever-present claim is best represented by the aforementioned, infamous phrase “I’m sorry, but as an AI I cannot…” And honestly, it’s not a claim: it’s the truth.
All tech is limited in some way. Even people are. However, recent media coverage has often touted AI as endlessly capable.373839 In a way, it is: AI is already doing things once thought impossible.40 There’s no real telling where it will go next, but that doesn’t mean its applications are infinite.
Usually, this claim builds on the objections above, such as AI’s inaccuracies making it pointless,41 or its data-stealing issues rendering it useless for generating content ethically.42
Naturally, these do reduce the potential use cases for current-generation AI—it shouldn’t be trusted to run a country by itself, for instance—but many of these proposed use cases, such as ones where infallible accuracy is required, are not what the technology was designed for anyway.
In particular, personal AI like Kin aren’t designed to be perfect, because, as discussed, they don’t need to be perfect in order to be effectively supportive.
What this boils down to is that people have assumptions about what AI is meant to do and be. And the research suggests these assumptions often cast AI as an endlessly flexible, hyper-intelligent resource that will outright replace human-made content.43 In other words, they’re totally inaccurate, which, as the previous article discussed, is likely down to the media’s historic presentation of AI.
AI’s true power arguably lies in its ability to make informed suggestions and imitations based on massive datasets, and Kin’s unique ability to build a secure, personalised dataset around its user makes it especially good at this.
But, in order for this application of AI to be more trusted, it needs to be decoupled from the stereotype that all AI are limited to the point of uselessness if they ever make a mistake.
And that’s why none of this means anything if people don’t understand AI, or the impact of the approach we’re taking to it.
So, Why Don’t People Trust AI?
All of that is to say: these objections often appear to be based on a mix of AI’s current usage and a misunderstanding of how AI works and what it’s meant for.4445 People aren’t sure how AI is meant to sound, why it’s not perfectly accurate, why companies don’t need to use it to steal data, and why its limitations don’t render it useless.
In short, these uncertainties could be remedied with more education on AI and on the proper use of it: use that doesn’t involve data scraping, for instance.
Trust Comes From Understanding AI Today
Therefore, to create trust in AI, people need to understand what AI is, how it works, and experience its trustworthiness first-hand.
Research suggests that the biggest barrier to this is the technology’s poor explainability to people outside of the technology industry—and, more worryingly, to some people in the industry, or even some people actively using it commercially. The kind of trust required cannot grow if AI remains a black box to the general public.
As our own explanation covered, the reality of AI is a far cry from the sentient, malicious machine intelligences of the movies. And it’s that knowledge which needs spreading—especially through channels like podcasts and social media, where the public are likely to see it.
Trust Also Comes From Realistic Expectations for AI
Additionally, as covered above, people aren’t sure what AI tools are for, and they’re using them for a range of applications they weren’t designed for. Some people expect AI to be flawlessly human. Others expect it to be an omniscient intelligence. Others still expect it to be flawless, period. And there is a growing body of research on these misconceptions.46
It's important for us, as part of the AI industry, to set realistic expectations for what AI can and cannot do. Not only so people get the best out of it, but so they know what they can and can’t trust it to do.
Today’s narrow generative AI excels in pattern recognition, data analysis, and content generation.4748 However, as discussed, this new technology is not perfect, and should not be expected to be. AI can assist in creating art, providing customer service, or even offering medical advice—but it is not a replacement for human judgement and creativity.
At least, that’s what this article says. Who actually determines what AI is for?
That is the conversation of the hour: the KPMG study mentioned in our last article found that the respondents expected a mix of government, public, private, and international guidance on where and how AI should be deployed.
Essentially, everyone thinks everyone should be involved. And everyone is probably right.
Kin’s answer on what AI should be is similar. Kin is a confidant, an assistant, and a supporter. Personal AI like it is about helping you make decisions, not making them for you.
That’s why Kin is designed to talk through problems, offer personalized suggestions, and enhance human capabilities rather than replace them: it’s an aid for people. Not a replacement.
So, now that we know mistrust in AI comes from the misleading ways it’s been explained, used, and spoken about, how can these things be undone?
This series’ next article will cover that exact question.
Gillespie, N. 2023. “Trust in artificial intelligence - KPMG Global”. kpmg.com. Available at: https://kpmg.com/xx/en/home/insights/2023/09/trust-in-artificial-intelligence.html [Accessed 07/12/24]
Marr, B. 2023b. “A Short History Of ChatGPT: How We Got To Where We Are Today”. forbes.com. Available at: https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/ [Accessed 07/12/24]
Aldiss, B.W. 1975. Billion Year Spree: The History of Science Fiction. London: Corgi
N.B.: The “beginning” of science fiction is hotly contested by cultural academics
Shelley, M. 1818. Frankenstein; or, The Modern Prometheus. London: Lackington, Hughes, Harding, Mavor, & Jones
Slocombe, W. 2021. “What Science Fiction Tells us About our Trouble with AI”. Liverpool.ac.uk. Available at: https://www.liverpool.ac.uk/literature-and-science/archive/archiveofthepoetryandsciencehub/essays/sfai/ [Accessed 08/22/24]
Humans. 2015–2018. Channel 4. Available at: https://www.channel4.com [Accessed 08/22/24]
Kubrick, S. 1968. 2001: A Space Odyssey. USA: Stanley Kubrick Productions; Metro-Goldwyn-Mayer
Asimov, I. 1950. “Runaround”. In: Asimov, I. 1950. I, Robot. New York: Gnome Press
James, E.; Mendlesohn, F. (eds.). 2003. The Cambridge Companion to Science Fiction. Cambridge: Cambridge University Press (Cambridge Companions to Literature)
Cameron, J. 1984. The Terminator. US: Orion Pictures
Kubrick, S. 1968. 2001: A Space Odyssey. USA: Stanley Kubrick Productions; Metro-Goldwyn-Mayer
Thomas, R. 1968. The Avengers, 1(55).
Dwork, C.; Minow, M. 2022. “Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law”. Daedalus, 151(2), pp. 309–321. Available at: https://doi.org/10.1162/daed_a_01918 [Accessed 21/08/24]
Quantic Dream. 2018. Detroit: Become Human. USA: Sony Interactive Entertainment America LLC.
Edwards, G. 2023. The Creator. USA: 20th Century Studios.
Brooker, C. 2011–2024. Black Mirror. UK: Channel 4; USA: Netflix. Available at: https://www.netflix.com [Accessed 08/22/24]
Arias, E. 2019. “How Does Media Influence Social Norms? Experimental Evidence on the Role of Common Knowledge”. Political Science Research and Methods, 7(3), pp. 561–578. Available at: https://www.doi.org/10.1017/psrm.2018.1 [Accessed 08/22/24]
Ling, R. 2020. “Confirmation Bias in the Era of Mobile News Consumption: The Social and Psychological Dimensions”. Digital Journalism, 8(5), pp. 596–604. Available at: https://www.doi.org/10.1080/21670811.2020.1766987 [Accessed 08/22/24]
Roe, J.; Perkins, M. 2023. “‘What they’re not telling you about ChatGPT’: exploring the discourse of AI in UK news media headlines”. Humanit Soc Sci Commun 10, 753. Available at: https://doi.org/10.1057/s41599-023-02282-w [Accessed 08/22/24]
Dwork, C.; Minow, M. 2022. “Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law”. Daedalus, 151(2), pp. 309–321. Available at: https://doi.org/10.1162/daed_a_01918 [Accessed 21/08/24]
Gillespie, N. 2023. “Trust in artificial intelligence - KPMG Global”. kpmg.com. Available at: https://kpmg.com/xx/en/home/insights/2023/09/trust-in-artificial-intelligence.html [Accessed 07/12/24]
Stanton, B.; Jensen, T. 2021. Draft NISTIR 8332: Trust and Artificial Intelligence. National Institute of Standards and Technology, U.S. Department of Commerce. Available at: https://doi.org/10.6028/NIST.IR.8332-draft [Accessed 07/05/24]
McKendrick, J.; Thurai, A. 2022. “AI Isn’t Ready to Make Unsupervised Decisions”. hbr.org. Available at: https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions [Accessed 07/12/24]
Anderson, J.; Rainie, L. 2018. “Artificial Intelligence and the Future of Humans”. pewresearch.org. Available at: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/ [Accessed 07/12/24]
Mitchell, M. 2023. “AI’s challenge of understanding the world”. Science, 382(6671). Available at: https://doi.org/10.1126/science.adm8175 [Accessed 07/12/24]
Mesa, N. 2023. “Does AI Creep You Out? You’re Experiencing the ‘Uncanny Valley’”. nationalgeographic.com. Available at: https://www.nationalgeographic.com/science/article/ai-uncanny-valley [Accessed 07/12/24]
Irving, D. 2024. “When AI Gets It Wrong, Will It Be Held Accountable?” www.rand.org. Available at: https://www.rand.org/pubs/articles/2024/when-ai-gets-it-wrong-will-it-be-held-legally-accountable.html [Accessed 07/12/24]
Reeve, O. et al. 2023. “What do the public think about ai?” adalovelaceinstitute.org. Available at: https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/ [Accessed 07/12/24]
Chanda, S.S.; Banerjee, D.N. 2022. “Omission and commission errors underlying AI failures”. AI & SOCIETY, 39(3), pp. 937–960. Available at: https://doi.org/10.1007/s00146-022-01585-x [Accessed 07/12/24]
Manyika, J.; Silberg, J.; Presten, B. 2019. “What Do We Do About the Biases in AI?” hbr.org. Available at: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai [Accessed 08/22/24]
Perrigo, B. 2024. “Scientists Develop New Algorithm to Spot AI ‘Hallucinations’”. time.com. Available at: https://time.com/6989928/ai-artificial-intelligence-hallucinations-prevent/ [Accessed 07/12/24]
Colbrook, M. 2022. “Mathematical paradox demonstrates the limits of AI”. cam.ac.uk. Available at: https://www.cam.ac.uk/research/news/mathematical-paradox-demonstrates-the-limits-of-ai [Accessed 08/03/24]
Griffith, E. 2021. “Facebook, Uber, and Dating Sites Top the List of Companies Collecting Your Personal Data”. uk.pcmag.com. Available at: https://uk.pcmag.com/news/129572/facebook-uber-and-dating-sites-top-list-of-companies-collecting-your-personal-data [Accessed 07/12/24]
Eliot, L. 2023. “Generative AI ChatGPT Can Disturbingly Gobble Up Your Private And Confidential Data, Forewarns AI Ethics And AI Law”. forbes.com. Available at: https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/ [Accessed 07/12/24]
Koetsier, J. 2023. “Americans Are Terrified About AI: 80% Say AI Will Help Criminals Scam Them”. forbes.com. Available at: https://www.forbes.com/sites/johnkoetsier/2023/08/22/americans-are-terrified-about-data-and-ai/ [Accessed 08/23/24]
Carroll, M. 2024. “Meta is planning to use your Facebook and Instagram posts to train AI - and not everyone can opt out”. news.sky.com. 26 Jun. Available at: https://news.sky.com/story/meta-is-planning-to-use-your-facebook-and-instagram-posts-to-train-ai-and-not-everyone-can-opt-out-13158655 [Accessed 07/12/24]
Alberge, D. 2023. “‘A kind of magic’: Peter Blake says possibilities of AI are endless for art.” theguardian.com. 5 Nov. Available at: https://www.theguardian.com/artanddesign/2023/nov/05/peter-blake-possibilities-ai-endless-for-art [Accessed 08/03/24]
Wood, C. 2024. “AI Starts to Sift Through String Theory’s Near-Endless Possibilities”. quantamagazine.org. Available at: https://www.quantamagazine.org/ai-starts-to-sift-through-string-theorys-near-endless-possibilities-20240423/ [Accessed 08/03/24]
Ortiz, K.O. 2023. “The possibilities are endless with AI”. linkedin.com. Available at: https://www.linkedin.com/pulse/possibilities-endless-ai-kelly-ortiz-ortiz [Accessed 08/03/24]
Haenlein, M., & Kaplan, A. 2019. “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence”. California Management Review, 61(4), 5-14. Available at: https://doi.org/10.1177/0008125619864925 [Accessed 08/03/24]
Kitson, N. 2024. “Study finds AI mostly useless at solving problems for coders.” TechCentral.ie. Available at: https://www.techcentral.ie/study-finds-ai-mostly-useless-at-solving-problems-for-coders/ [Accessed 08/03/24]
Cohan, W. 2023. “AI is learning from stolen intellectual property. It needs to stop”. washingtonpost.com. Available at: https://www.washingtonpost.com/opinions/2023/10/19/ai-large-language-writers-stealing/ [Accessed 08/23/24]
Ballard, J. 2024. “Americans’ top feeling about AI: caution”. today.yougov.com. Available at: https://today.yougov.com/technology/articles/49099-americans-2024-poll-ai-top-feeling-caution [Accessed 08/23/24]
Bewersdorff, A. et al. 2023. “Myths, mis- and preconceptions of Artificial Intelligence: A review of the literature”. Computers and Education: Artificial Intelligence, 4, p.100143. Available at: https://doi.org/10.1016/j.caeai.2023.100143 [Accessed 07/12/24]
Dwork, C.; Minow, M. 2022. “Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law”. Daedalus, 151(2), pp. 309–321. Available at: https://doi.org/10.1162/daed_a_01918 [Accessed 21/08/24]
Alizadeh, F.; Stevens, G.; Esau, M. 2021. “I Don’t Know, Is AI Also Used in Airbags?: An Empirical Study of Folk Concepts and People’s Expectations of Current and Future Artificial Intelligence”. i-com, 20(1), pp. 3-17. Available at: https://doi.org/10.1515/icom-2021-0009 [Accessed 07/12/24]
Sheikh, H.; Prins, C.; Schrijvers, E. 2023. “Artificial Intelligence: Definition and Background”. In: Mission AI: Research for Policy. Springer, Cham. Available at: https://doi.org/10.1007/978-3-031-21448-6_2 [Accessed 07/12/24]
Holzinger, A. et al. 2023. “Toward human-level concept learning: Pattern benchmarking for AI algorithms”. Patterns, 4(8), p.100788. Available at: https://doi.org/10.1016/j.patter.2023.100788 [Accessed 07/12/24]