The previous article in this series explored why people don't trust artificial intelligence, setting up a discussion of the solutions that address those concerns. Those AI solutions need to consider everything from algorithms to end users.
These trustworthy AI initiatives are in high demand, too. According to KPMG's previously referenced 2023 study on public perceptions of AI systems, 87% of people surveyed worldwide want restrictions on the use of AI.1
Perhaps surprisingly, the AI technology needed to address the objections raised in the previous article largely exists. That doesn't just suggest trustworthy AI systems aren't far off; it suggests ethical AI is arguably possible now. It just needs proper implementation, combining task-based and relationship-based trust in its design while ensuring explainability.
If those terms sound unfamiliar, task-based and relationship-based trust are covered in the first article in this series, which, along with the previous article, provides the necessary context for understanding AI risk and mitigation strategies.
Otherwise, let's discuss the technology hinted at above, and look at how AI models could practically become more trustworthy—starting with building trust through robust decision-making processes.
Task-Based Trust
Looking back at that KPMG study, the first thing responsible AI would require is a set of robust AI ethics guidelines, especially for its participation in real-world tasks, workplaces, and relationships.
The responses in that study suggest people's immediate concerns include limiting AI's role in HR decisions like hiring and firing, ensuring generative AI doesn't compromise human values outright, preventing AI-generated content from being trusted at face value, and making sure both training data and user datasets are handled responsibly.
Overall, KPMG's study suggests people would like to ensure that, where decisions involve people and their livelihoods, machine learning and language models still include human oversight to protect against the limitations that modern AI is arguably becoming known for.2
Even if AI hallucinations are being reduced through improved robustness, they are still a valid concern, affecting trust in everything from chatbots to healthcare applications. While a human’s decisions may not necessarily be ‘better’ than an AI’s in every situation,3 things like hallucinations mean people feel more comfortable when a human is involved in some capacity.
In short, people want to be able to trust AI systems to complete tasks in a way that will improve their lives, not damage them—which for most end users means human oversight and clear metrics for success.
Doing This Practically
Of course, challenges like the "black box" nature of some AI algorithms (where sometimes not even Google understands its own AI systems)4 are still being solved in AI development. But there's nothing preventing companies from being more transparent about the assessment and supervision opportunities around their AI technology.
To do that, companies would first need to adopt AI governance frameworks based on stakeholder concerns and real-world research. They'd then need to be transparent about what their ethical guidelines are, and demonstrate how those guidelines are being followed in practice.
For an overall idea of what these guidelines should be, refer to the National Institute of Standards and Technology's (NIST) nine factors of trust (plus our tenth) from the previous article.5 Otherwise, this article will discuss more concrete examples and use cases.
Without going as far as Radical Transparency, which was discussed during our look at honesty in the workplace, engaged discussions between AI development teams and the public, alongside independent assessments of their systems, would be a concrete starting point.
Even just a "devlog" style blog demonstrating these AI systems in action would help with interpretability.
In terms of NIST's nine factors of trust from the previous article, those initiatives would address safety, accountability and privacy: when AI applications aren't left alone for decision-making, aren't being solely trusted to provide important information, and aren't misusing datasets, the human element restores confidence.
Next up are the problems of accuracy, reliability and objectivity. For task-based trust, these metrics are the most important.
A trustworthy AI system would partially address these not just by saying upfront that its algorithms can make mistakes (which most already do), but by discussing how known inaccuracies in its machine learning models are being addressed.
However, there comes an undeniable point where, to build trust, an AI model just needs to be largely accurate, reliable and objective. It needs to be able to do what it says it can do every time it's asked.
For many AI solutions, this functionality largely exists, but can be improved upon. While perfection is impossible, addressing harmful bias in training data is increasingly feasible.
In the meantime, it's important to acknowledge that AI-generated content can contain errors, and that the datasets many models are trained on can contain biases: when what companies say about their AI systems lines up with those systems' real-time output, people are much more likely to trust both the AI technology and the company behind it.
After that, relationship-based trust needs to be tackled.
Practical, Relationship-Based Trust
As the previous article covered, to build a relationship for trust to grow from, AI needs to be personable, and be able to respond based on memorized personal data while maintaining cybersecurity.
Outside of the inherent way task-based trust builds relationships, transparent and secure data management is probably the biggest factor in the AI lifecycle.
These practices show end users that their human values and privacy are respected, which encourages them to share more data, which in turn builds emotional investment and trust.
That would address the privacy, AI safety, and security factors. For accountability, explainable AI, and robustness, clear feedback loops would be particularly effective.
When people know they can give feedback on AI systems, can see that it's being listened to, and have a way to confirm its impact, their relationship with not just the AI applications but the entire AI governance structure becomes two-way. People gain a say in risk management, and know that whatever issues they raise will be considered.
As such, trust in the AI increases: not only can potential risks be reliably reported, but users can make their own mark on the AI, making it genuinely personal.
Following on, AI development should incorporate adaptive learning algorithms, allowing AI systems to learn from mistakes and improve over time. This would best be implemented at the individual user level: it prevents two end users from pulling a shared language model in opposite directions, lets each personal AI instance shape itself to its user, and reduces the need for a more in-depth feedback process.
When AI can accurately and reliably remember and implement changes suggested by its users in real time, it will only feel more like a relationship, and allow trust to build even further.
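To make that concrete, here is a minimal sketch of what per-user adaptation could look like. It's a hypothetical illustration (the names and structure are ours, not Kin's actual implementation): each user gets their own small store of corrections that is injected into their requests, so one person's feedback never reshapes the model another person relies on.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalAdapter:
    """Hypothetical per-user adaptation layer sitting on top of a shared model."""
    user_id: str
    corrections: list[str] = field(default_factory=list)  # feedback kept for this user only

    def add_feedback(self, correction: str) -> None:
        """Store a correction the user has made, e.g. 'Keep answers under 100 words.'"""
        self.corrections.append(correction)

    def build_prompt(self, user_message: str) -> str:
        """Prepend this user's remembered preferences to every request.

        The shared model's weights never change, so feedback from one user
        cannot push the model in a direction another user dislikes.
        """
        preferences = "\n".join(f"- {c}" for c in self.corrections)
        return (
            "Follow these user-specific preferences:\n"
            f"{preferences}\n\n"
            f"User message: {user_message}"
        )

# Two users adapt their own instances in opposite directions without conflict.
alice = PersonalAdapter("alice")
alice.add_feedback("Give short, bullet-pointed answers.")

bob = PersonalAdapter("bob")
bob.add_feedback("Explain things in long, detailed paragraphs.")

print(alice.build_prompt("Summarize today's schedule."))
```

A production system might use retrieval over a personal memory store or lightweight fine-tuned adapters instead, but the principle is the same: feedback stays scoped to the individual.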
User-Controlled Data
These two elements converge in one of Kin's central beliefs: we don't think trust can be fully achieved or deserved unless our users have full control over their data throughout the AI lifecycle.
To achieve this, Kin needs all of the above, and more. Kin needs user data stored locally on the user's device (and encrypted for their eyes only), with the machine learning models powering Kin also stored on the device as much as possible (an approach called Edge ML).
This not only leverages the untapped computing power in people's pockets, but also reduces the number of stakeholders storing and processing our users' data (ideally to zero).
As a result, the use of AI becomes more secure and environmentally friendly by default, as there are no middlemen or power-hungry data centers required for these AI solutions.6
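For a rough illustration of what local-first storage could look like (a minimal sketch, not Kin's actual code), here is user data being encrypted on-device with the open-source Python cryptography library before it ever touches disk; in a real app the key would live in the device's secure keystore, so only the user can read what's stored.

```python
import json
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# Sketch only: the key is generated and held in memory here; a real app
# would keep it in the device's secure keystore, tied to the user.
key = Fernet.generate_key()
cipher = Fernet(key)

def save_memory(path: Path, memory: dict) -> None:
    """Encrypt the user's data locally before writing it to disk."""
    ciphertext = cipher.encrypt(json.dumps(memory).encode("utf-8"))
    path.write_bytes(ciphertext)

def load_memory(path: Path) -> dict:
    """Decrypt on-device; without the key, the file is unreadable."""
    return json.loads(cipher.decrypt(path.read_bytes()).decode("utf-8"))

memory_file = Path("kin_memory.enc")
save_memory(memory_file, {"preferences": ["short answers"], "notes": []})
print(load_memory(memory_file))
```

Pair storage like this with models that run on the device itself (the Edge ML approach above) and neither the data nor the inference ever needs to leave the user's pocket.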
Kin is already doing this partially, and we're watching AI development for new technology that would allow us to do it more completely—technology like Fully Homomorphic Encryption, which we discuss more in our blogs on cybersecurity in AI.
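To give a flavour of what Fully Homomorphic Encryption enables, here is a minimal sketch using the open-source TenSEAL library (our choice for illustration; this isn't a statement about the specific tools Kin will use): arithmetic is performed on numbers that stay encrypted the whole time, so a server could process data it can never actually read.

```python
import tenseal as ts  # pip install tenseal

# Set up a CKKS context for approximate arithmetic on encrypted real numbers.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Encrypt some user data on the device.
mood_scores = [0.2, 0.5, 0.9]
encrypted = ts.ckks_vector(context, mood_scores)

# This computation could run remotely without the plaintext ever being exposed.
encrypted_result = encrypted * 2 + [1.0, 1.0, 1.0]

# Only the key holder (the user) can decrypt the answer.
print(encrypted_result.decrypt())  # approximately [1.4, 2.0, 2.8]
```

FHE like this is still too computationally heavy for full-scale model inference, which is why it sits in the "watching" category above rather than in the product today.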
With these three elements in place, at least the majority of a truly trustworthy AI system would exist.
In short, the AI would have human-centered measures in place to ensure its decision-making processes are fair and correct, be able to learn from and respond in depth to its users' individual history and preferences, and do all of that securely.
Really, these features would help keep AI technology a tool serving human values, rather than simply a profit-generating product.
Once AI reaches that level of robustness, the next hurdle would be breaking down stereotypes about the impact of AI, and actually convincing people that responsible AI is possible.
Building Trust in AI
Mostly, it's a mix of everything we've discussed so far. But here are the three most important initiatives for building trust in artificial intelligence:
Teaching Efforts
As mentioned in the previous article, a basic level of AI explainability would massively help people learn to recognize trustworthy AI systems, and the use cases in which they're appropriate.
Given their expertise in AI development, it makes sense that companies working on AI applications should at least be supporting the spread of technological literacy to dispel misconceptions about AI risk.
Microsoft, OpenAI, Google, and other stakeholders are already putting some effort into this, by providing free assessments and guidance material on their AI models, and AI technology in general. Kin's co-founder, Yngvi Karlson, even runs some AI ethics classes for Google. But the industry could be doing more with such initiatives.
Educational programs in schools and workplaces would help people understand how AI algorithms work, their potential benefits in areas like healthcare and robotics, and the need for proper risk management. As well as providing basic technological literacy, these classes would encourage more people to use AI responsibly, and further fuel innovation in its applications.
And, of course, this knowledge would not only begin to build a relationship between end users and AI systems, but would help them understand what trustworthy AI looks like, so they respect it more when they see it.7
Still, teaching is only one piece. People need ways to easily apply that knowledge to real-world applications.
Transparent Technology
As previously hinted at, explainability of how an AI operates and what training data it uses is crucial. If this interpretability isn't easily accessible, and there's no good reason for withholding it, people are left wary about AI risk, especially once education has given them an understanding of AI governance.
This is exactly why Kin's memory feature shows users what our machine learning systems know about them, and why the app ships with a 'reset' button to wipe all of their data if they so choose.
It's also why we're working on feature guides that explain what our AI algorithms do, and how best to use them for decision-making.
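As a purely hypothetical sketch of the idea (the names and structure here are invented for illustration, not taken from Kin's codebase), a user-facing memory store with 'view' and 'reset' controls could be as simple as this:

```python
from datetime import datetime, timezone

class UserMemory:
    """Illustrative user-controlled memory store: inspectable and erasable."""

    def __init__(self) -> None:
        self._facts: list[dict] = []

    def remember(self, fact: str, source: str) -> None:
        """Record a fact along with where it came from, for transparency."""
        self._facts.append({
            "fact": fact,
            "source": source,
            "stored_at": datetime.now(timezone.utc).isoformat(),
        })

    def view(self) -> list[dict]:
        """Show the user exactly what the system knows about them."""
        return list(self._facts)

    def reset(self) -> None:
        """The 'reset' button: wipe everything the system has stored."""
        self._facts.clear()

memory = UserMemory()
memory.remember("Prefers morning check-ins", source="chat on 11/06/24")
print(memory.view())   # the user can audit their data at any time
memory.reset()
print(memory.view())   # []
```

The point isn't the code itself; it's that "show me what you know" and "forget everything" are cheap to build and go a long way toward the transparency people are asking for.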
These are initiatives the AI development industry as a whole could benefit from implementing. Companies proving to stakeholders that they're operating ethical AI the way they say they are will only improve trust in their AI solutions.
And, that'd just be the start.
Talking Progression
Simply showing how AI systems operate won't be enough. Companies developing AI need to discuss its impact openly with the European Union, other governing bodies, and the public: where they plan to take AI technology in the long term, alongside current functionality and short-term improvements to robustness.
By being clear about where they plan to go with AI applications, companies can give their end users more input into that direction over the entire AI lifecycle. That way, feedback isn't just easier to include—it'll make the AI solutions even more trustworthy, by shaping them around human values as both product and consumer evolve over time.
Kin's beta is fulfilling that role right now, by sharing our long-term plans for responsible AI with users and allowing them to give feedback on everything from current metrics to future goals in real-time.
We're trying to champion trustworthy AI systems with Kin, and prove that ethical AI is possible right now, so people see the true potential of artificial intelligence, and other AI development teams follow suit.
Conclusion
With all of the above elements in play, the world would be in a strong position to build the guidelines discussed for trustworthy AI solutions.
People who understand the capabilities of the AI technology they're using, how its algorithms work, and how their datasets are involved will have a much easier time trusting AI applications. And, with a direct line to companies implementing AI and governing bodies, creating frameworks and regulations for risk management will be easier, too.
In short, organizations developing AI just need to be more transparent about their technology, and more open to stakeholder feedback, to become more trustworthy. That almost sounds too obvious.
Still, if that's how to build trust in artificial intelligence, what should be done once it's built? That's the question the next article in this series will cover.
Anon. 2023. "Trust in artificial intelligence". kpmg.com. Available at: https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-in-artificial-intelligence.html [Accessed 11/06/24]
Kelly, J. 2024. "Google's AI Recommended Adding Glue To Pizza And Other Misinformation—What Caused The Viral Blunders?". forbes.com. Available at: https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-pizza-viral-blunders/ [Accessed 08/02/24]
Thomas, D. 2021. "Theranos scandal: Who is Elizabeth Holmes and why is she on trial?". bbc.co.uk. Available at: https://www.bbc.co.uk/news/business-58336998 [Accessed 08/02/24]
Milmo, D. 2023. "Google chief warns AI could be harmful if deployed wrongly". theguardian.com. Available at: https://www.theguardian.com/technology/2023/apr/17/google-chief-ai-harmful-sundar-pichai [Accessed 11/06/24]
Anon. 2021. "NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems". nist.gov. Available at: https://www.nist.gov/news-events/news/2021/05/nist-proposes-method-evaluating-user-trust-artificial-intelligence-systems [Accessed 11/06/24]
Subramani, D.; Araujo, S. 2022. "Demystifying machine learning at the edge through real use cases". aws.amazon.com. Available at: https://aws.amazon.com/blogs/machine-learning/demystifying-machine-learning-at-the-edge-through-real-use-cases/ [Accessed 08/03/24]
Thiebes, S.; Lins, S.; Sunyaev, A. 2020. "Trustworthy Artificial Intelligence". Electronic Markets, 31(2), pp. 447–464. Available at: https://doi.org/10.1007/s12525-020-00441-4 [Accessed 07/12/24]