The previous article in this series discussed how a trustworthy AI system could actually be built, and what steps would need to be taken to keep both public and corporate trust in it strong.
Building both these trustworthy AI systems and trust in those systems will mark a significant milestone for the AI industry.
However, it also won’t be the end of the AI development lifecycle.
As AI systems become more integrated into daily life, from healthcare applications to facial recognition technology, it is likely that new challenges and responsibilities will emerge. Large Language Models (LLMs) like ChatGPT have already demonstrated how machine learning and automation can enhance productivity across ecosystems while wreaking havoc on data privacy, cybersecurity, and public perception.
Future issues could prove even more damaging to trust in AI technology as it becomes further entwined with, and trusted within, human life.
Like all previous problems, these issues will require careful consideration from stakeholders including providers, policymakers, and end users. This article will explore what that might look like.
Wait, What Is Trustworthy AI?
Though trust and trustworthy AI have been covered throughout the earlier articles in this series, a quick summary of trustworthy AI systems would be helpful.
These responsible AI systems combine technical robustness with AI ethics, demonstrating consistent functionality, explainable decision-making, and strong safeguards, all while respecting personal data protection standards. Organizations like NIST are already hard at work providing guidelines on how this should be implemented; we discussed those guidelines in previous articles.
Most importantly, trustworthy AI goes beyond simply addressing the "black box" nature of some AI models: it involves creating AI applications that can clearly articulate their reasoning to both technical experts and everyday end users. This transparency is what builds confidence in AI tools and enables effective human oversight of training data and algorithms.
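To make that transparency concrete, here is a minimal sketch of one well-known explainability technique, permutation importance, using scikit-learn. The dataset, model, and numbers below are illustrative stand-ins, not a recommendation for any particular system:

```python
# A minimal explainability sketch using permutation importance (scikit-learn).
# The dataset and model here are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```

Even a plain ranked list like this gives a technical reviewer something to audit, and gives an end user a plain-language answer to "what was this decision based on?"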
So, what challenges might face an AI system that meets the above criteria?
Over-Reliance: The Hidden Cost of Convenience
While generative AI can significantly enhance efficiency and productivity, there's a growing risk of over-reliance on these systems. There is worry that when people become too dependent on AI tools for tasks they could previously handle independently, basic skills and creative thinking abilities may deteriorate.1 At that point, artificial intelligence would shift from a helpful tool to an essential crutch.
The concern extends beyond simple automation. As AI models become more sophisticated in their ability to provide recommendations and insights, there's a risk that human judgment and critical thinking might become less needed, and thus less practiced.
This is particularly concerning in professional environments where maintaining and developing human expertise is crucial for long-term innovation and problem-solving.
The key lies in striking a balance between leveraging AI functionality and maintaining human agency and creativity.
Organizations need to implement guidelines that encourage the thoughtful use of AI tools while preserving opportunities for human learning and development. This might involve creating frameworks for when machine learning should be consulted versus when human judgment should take precedence, or designing AI to support human tasks rather than complete them outright.
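As a rough illustration only, such a framework can be expressed in code. Everything in the sketch below, the `Recommendation` type, the `stakes` labels, and the 0.85 confidence floor, is a hypothetical assumption about how one organization might draw the line:

```python
# A hedged sketch of an "AI consults, human decides" routing policy.
# The thresholds and labels are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str         # what the AI suggests doing
    confidence: float  # the model's self-reported confidence, 0..1
    stakes: str        # "routine" or "high", as judged by the workflow

def route(rec: Recommendation, confidence_floor: float = 0.85) -> str:
    """Decide who acts on an AI recommendation."""
    if rec.stakes == "high":
        return "human decides; AI output shown as a suggestion only"
    if rec.confidence < confidence_floor:
        return "human reviews before any action is taken"
    return "AI may act, with the decision logged for later audit"

print(route(Recommendation("approve", 0.92, "routine")))  # AI may act, logged
print(route(Recommendation("approve", 0.97, "high")))     # human decides
```

The exact thresholds matter far less than the habit the policy enforces: human judgment stays in the loop because the workflow keeps exercising it.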
Privacy in an AI-Integrated World
As AI technology becomes more deeply embedded in the modern world, protecting personal data becomes increasingly crucial. The integration of AI applications into various aspects of life creates vulnerabilities that require robust cybersecurity measures.
This risk is particularly acute given the vast datasets that machine learning systems require to function effectively. That reliance leaves them open to many attacks, from data poisoning that implants bias (making the AI favour certain opinions over others) to simply severing the database connection and rendering the system inoperable.
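A simple, long-established defence against that first kind of tampering is to fingerprint the training data and verify it before every run. This is only a sketch of the idea, with hypothetical file names, and it does not replace broader security controls:

```python
# A minimal dataset-integrity sketch: record a SHA-256 checksum of the
# training data, then refuse to train if the data has changed since.
# File names and contents are hypothetical stand-ins for a real pipeline.
import hashlib
from pathlib import Path

def dataset_fingerprint(path: str) -> str:
    """Hash a dataset file so any later modification is detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

Path("training_data.csv").write_text("id,label\n1,0\n")  # stand-in data file

# At release time: store the fingerprint alongside the data.
Path("training_data.csv.sha256").write_text(dataset_fingerprint("training_data.csv"))

# Before training: stop if the data no longer matches the fingerprint.
expected = Path("training_data.csv.sha256").read_text()
assert dataset_fingerprint("training_data.csv") == expected, "dataset modified!"
```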
The challenge extends beyond individual privacy concerns to broader societal implications. In a real-world context where AI systems can analyze vast amounts of training data, there's potential for this technology to be used to glean detailed insights about entire populations. This requires careful consideration of data privacy and ethical AI development practices, ideally codified into international law.
The Autonomy Question
The risk of AI systems gaining too much autonomy presents another significant challenge. As trust in AI tools grows, there's an increasing tendency to defer to AI-generated decisions without adequate oversight. This is particularly concerning given current limitations in algorithmic bias mitigation, and the need for human values and opinions in decision-making processes.
Even in relatively mundane workplace scenarios, excessive trust in Large Language Models could lead to problematic outcomes. Workers might begin to accept AI recommendations without question, potentially missing critical errors that human oversight would typically catch. Though it might sound far-fetched now, it is entirely possible, and it highlights the importance of maintaining human control throughout the AI lifecycle.
The Challenge of Bias
As alluded to, algorithmic bias remains a persistent challenge in AI development,2 stemming from both datasets and human-implemented safeguards. This issue threatens the trustworthiness of AI applications by potentially causing systems to misrepresent information. As artificial intelligence continues to influence how people learn and interact with information, these biases could contribute to misinformation on a massive scale, and could even be weaponized.
For example, both OpenAI’s ChatGPT3 and Meta’s Llama4 have been caught avoiding discussion of former US President Donald Trump, though neither LLM had any issue discussing his contemporaries at the time, such as US President Joe Biden or US Vice-President Kamala Harris.5
While both companies claimed this was to “prevent misinformation”, such one-sided policies do not just make AI seem biased and untrustworthy; they actively make it so.
Addressing bias requires developing more representative training data, implementing robust mitigation strategies, and ensuring transparency in AI systems. It also requires maintaining all of this, with proper human oversight, once the system is in place.
Regular auditing of AI tools for bias will therefore likely be crucial to identify and address emerging concerns in real-world applications, particularly in sensitive areas like facial recognition technology. Thankfully, ways to do this are already being developed.6
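As a toy example of what such an audit can check, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between two groups, on synthetic data. The group labels, rates, and the 0.1 flag threshold are arbitrary illustrative choices:

```python
# A hedged sketch of a basic bias audit: compare positive-outcome rates
# across groups (demographic parity). All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)  # a protected attribute
# Synthetic model outputs, deliberately skewed in favour of group A.
predictions = rng.random(1000) < np.where(group == "A", 0.55, 0.40)

rates = {g: predictions[group == g].mean() for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])
print(f"positive rate by group: {rates}, gap: {gap:.2f}")
if gap > 0.1:  # illustrative threshold, not a standard
    print("Audit flag: outcomes diverge; inspect training data and features.")
```

Real audits go well beyond a single metric, but even this level of routine measurement turns "is the system biased?" from a debate into a number that can be tracked over time.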
Workforce Evolution and Job Displacement
The expanded adoption of AI technology raises valid concerns about workforce automation. Beyond immediate social and economic impacts, there's broader concern about the potential degradation of human expertise—especially given its current use in the workplace.
If AI systems begin replacing workers entirely in certain use cases, it could create a negative feedback loop affecting the quality of future training data and overall ecosystem health. Much as the industrial revolution left us with fewer blacksmiths, carpenters, and weavers, a lack of human experts could eventually force AI to train on its own outputs, which can worsen the results of modern generative AI systems substantially.7
The solution lies in positioning artificial intelligence as a tool for augmenting human capabilities, rather than replacing them, and keeping it there. This approach allows organizations to leverage AI functionality while maintaining crucial human elements that drive innovation and creativity. AI tools can be most effectively used to automate routine tasks, freeing humans to focus on more complex, creative, and strategic work.
This, AI’s most basic and powerful strength, should be neither forgotten nor lost.
Preventing Abuse and Misuse
As AI capabilities expand, the potential for misuse grows proportionally. This is particularly concerning where commercial interests might be allowed to override AI ethics. The technology's versatility means it must be subject to careful oversight and restriction throughout its lifecycle, with providers and policymakers working together to ensure responsible use.
Maintaining trust requires an ongoing commitment to ethical AI development. This includes implementing robust cybersecurity measures, being transparent about AI models' capabilities and limitations, and establishing clear guidelines for responsible use and full interpretability of artificial intelligence.
The moment these factors disappear from easy public access is likely to be the moment trust disappears from the AI industry.
Preparing for an AI-Integrated Future
Organizations and individuals will also benefit from taking proactive steps to prepare for increased AI integration. This includes developing skills that complement AI capabilities, understanding basic machine learning concepts, and staying informed about AI development in their fields.
To support this, organisations should invest in AI literacy programs, develop clear usage policies, and create frameworks for ethical AI deployment. An educated population will be far better at recognising untrustworthy AI, and at using trustworthy AI safely, in a world where society has adapted to trusted AI.
Merging Artificial Intelligence with Societal Frameworks
In the face of this, it is also likely that trustworthy AI systems will be integrated into major societal frameworks like governments, banks, and militaries, all while hopefully maintaining human agency. This integration will require developing new governance structures, educational systems, and social protocols that can adapt to rapidly evolving AI technology.
This doesn't mean AI "taking over," but rather the emergence of a new societal structure where artificial intelligence enhances human capabilities across all sectors. Healthcare providers will work alongside AI diagnostic tools, educators will utilize AI applications to personalize learning, and creative professionals will leverage generative AI to expand their artistic possibilities.
We’ve discussed the importance of continuous learning before, and for good reason. Without properly responsive educational and trust-building institutions in place, this AI integration carries many risks of abuse and misuse, if it happens at all.
The key to this lies in maintaining AI flexibility while preserving human agency. As machine learning capabilities expand, society must continually reassess and adjust its relationship with AI tools, ensuring that technological advancement serves rather than supersedes human values and interests. This will involve regular evaluation of AI applications' impact on social structures, economic systems, and human relationships, the groundwork for which was laid when trustworthy AI was first developed, in modern measures like the EU's AI Act.
How Kin is Preparing to Address This
Though it might seem early, we’re already thinking this far ahead. Kin is more than a personal AI for us; it’s a push toward what trustworthy AI could be and could do.
Kin is designed to learn about its users in order to help them make better decisions—not make those decisions for them. It’s about growing skills, not outsourcing them. And it’s about doing all of that with an AI system that can be trusted.
Kin’s fully transparent and controllable data use, its reliance on open-source AI systems, our effort to make it easily explainable, blogs like these, and our presence in AI education are all building toward that.
We are building Kin to serve as an example proving that AI can be transparent, trustworthy, and still effective, and we hope that comes across.
Conclusion: Building a Sustainable Future
The path ahead still requires a careful balance between innovation and responsibility. As AI systems continue to evolve and integrate more deeply into society, maintaining trust will require a growing and ongoing commitment to ethical AI practices, transparency, and human-centric development approaches.
This includes regular assessment of AI impact, continuous updating of security measures, and maintaining focus and education on using AI to enhance, rather than replace, human capabilities.
To do this, providers, policymakers, and end users will need to work together, using the tools they are building right now to make trustworthy AI attainable. Similarly, organizations must become, and remain, committed to responsible AI development, while everyone would do well to stay informed about how artificial intelligence is being implemented in their ecosystem.
Through these careful considerations and structured approaches, society can move forward into an AI-integrated future that enhances human potential while preserving the essential elements of human experience and decision-making. It will call for nothing short of unwavering commitment to ethical AI development and deployment—and that’s something only reached through proof, education, and legislation.
Kostick-Quenet, K. M.; Gerke, S. 2022. “AI in the hands of imperfect users”. NPJ Digital Medicine, 5(1), 197. Available at: https://doi.org/10.1038/s41746-022-00737-z [Accessed 07/12/24]
Varsha, P. S. 2023. “How can we manage biases in artificial intelligence systems – A systematic literature review”. International Journal of Information Management Data Insights, 3(1), p.100165. Available at: https://doi.org/10.1016/j.jjimei.2023.100165 [Accessed 07/12/24]
Johnson, A. 2023. “Is ChatGPT Partisan? Poems About Trump And Biden Raise Questions About The AI Bot’s Bias—Here’s What Experts Think”. forbes.com. Available at: https://www.forbes.com/sites/ariannajohnson/2023/02/03/is-chatgpt-partisan-poems-about-trump-and-biden-raise-questions-about-the-ai-bots-bias-heres-what-experts-think/ [Accessed 08/02/24]
Tangermann, V. 2024. “Meta’s AI Says Trump Wasn’t Shot”. Yahoo News. July 31st. Available at: https://uk.news.yahoo.com/metas-ai-says-trump-wasnt-143317648.html [Accessed 08/02/24]
Zilber, A. 2024. “Meta’s AI assistant calls Trump assassination attempt ‘fictional’”. nypost.com. Available at: https://nypost.com/2024/07/29/business/metas-ai-assistant-calls-trump-assassination-attempt-fictional/ [Accessed 08/02/24]
Varsha, P. S. 2023. “How can we manage biases in artificial intelligence systems – A systematic literature review”. International Journal of Information Management Data Insights, 3(1), p.100165. Available at: https://doi.org/10.1016/j.jjimei.2023.100165 [Accessed 08/02/24]
Rao, R. 2023. “AI-Generated Data Can Poison Future AI Models”. Scientific American. Available at: https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/ [Accessed 07/12/24]