If We Take On Personal AI Assistants, We Should Do It Right This Time
by Pernille Tranberg, author, journalist and independent advisor in data and AI Ethics. Also published on DataEthics.eu
There are no limits to the positive aspects of personal AI that we will all get sooner or later, according to AI advocates. A personal AI is a digital service based on artificial intelligence that knows you so well that it can do many tasks for you, tailored exactly to you. Listen closely to the marketing-hyped promises:
It can boost your productivity by handling tasks like scheduling, email replies, and reminders. It can post updates on your social media accounts and comment on your connections' updates to nurture your network. It can offer you personalised recommendations based on your preferences and habits, on everything from travel to what to put into your fridge and mouth. It can help with your budget and financial struggles and make you more financially effective. It can help you speak with others in a language you don't master through live translation. It can monitor your health metrics and remind you when to take your medicine or vitamins. It can monitor your kid or your grandfather in real time, if you want to go down that road. And if you have a disability, it can provide you with, for example, voice-activated services.
Are you out of breath? If not, you will be, as all these services will probably make you, me, and the rest of the human race even lazier and more convenience-seeking than we already are. I am truly in doubt about whether I want these personalised services. But if I choose some of them, they will have to be done in a different way than the majority of AI advocates try to sell us today.
Data Vampires
AI services need a lot of personal data to 'serve' you as described above. They need access to your calendar, your email, and your social media accounts. They will want to know your browsing, travel, and shopping habits, your salary, your pension funds, your finances in general, your health status and needs, and a lot more. Do you want to hand those data to Microsoft, OpenAI, Google, Meta, Apple, or Amazon, the companies sitting on most AI today? They are all building personal AI assistants or integrating AI into their existing ones like Google Assistant, Amazon's Alexa, or Microsoft's Cortana. Most people will probably give in and use big tech's AI services, just as they have already paid with their data for social media, search, and chatbots.
I don't want to pay with my data. I want to know the value and the price I am paying for a service, and I won't trust the tech giants with my personal data.
The Ethical Personal AI Model
I want a different model for my personal AI. My dream personal AI assistant is a service that I - and only I - control. I sit on all my data, and I decide when and how to integrate and enrich them. I decide whether to give my AI access to my calendar, and whether to import my health journal or my LinkedIn data. I decide whether to enrich my bank account data with my data from the tax authority.
A lot of my data will still be out there, but spread around. LinkedIn has some. My employer has some. My supermarket has some. Apple has some. The health authorities have a lot. WhereToGo has my routes. Facebook has no more, as I requested all my data via my portability rights under the GDPR before deleting my Facebook account. Thus, I have a copy of my Facebook data, just as I can get a copy of my data from the supermarket, Apple, LinkedIn, my employer, etc.
Nobody but me will have access to all my data. I sit on the assembled data jigsaw puzzle of me and my life, and I decide whom to share it with. Of course, I will share some of my data with trusted services, as long as I can do so in a safe environment.
Some of the Services
There are services out there that seem to live up to my demands. One is the Danish start-up Kin, or My Kin. It is building a personal AI assistant in which I will be able to integrate and enrich my data on my own device, and to which only I have access. The people behind Kin, which is out for pilot testing, do not have access to my data. Their business model is that you pay for the service with money, not data. They tell me that they plan to use “Trusted Execution Environments” and undergo external audits, so the user will know that they do what they say.
I am sure that other start-ups are inventing something similar to Kin. I just don't know them, so let me know if you do.
For sharing my data with others, I also want a model where I, and only I, am in control of my data. And guess what, I know a service. The Danish Data for Good Foundation, DfG, is working on exactly that. I have known the foundation for years, and it is finally getting tailwind. As a data intermediary as defined in the relatively new Data Governance Act in the EU, it can provide a safe sharing environment based on multi-party computation, which means the data can be analysed without being revealed to any single party. DfG is working with EU funds to get its business model up and running.
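To make the multi-party computation idea concrete, here is a minimal sketch of additive secret sharing, one common building block of MPC. This is purely illustrative and is not DfG's actual implementation; the salary figures and the three-party setup are assumptions for the example. The point is that each party holds only a random-looking share, yet the parties can jointly compute a sum without anyone seeing the individual inputs.

```python
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def split(value: int, n_parties: int = 3) -> list[int]:
    """Split an integer into n additive shares that sum to value mod P.
    Any subset of fewer than n shares reveals nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares into the original value."""
    return sum(shares) % P

# Two individuals secret-share their (hypothetical) salaries.
alice_shares = split(52_000)
bob_shares = split(61_000)

# Each party adds the two shares it holds, locally; only the *sum*
# of the inputs is ever reconstructed, never the inputs themselves.
sum_shares = [(a + b) % P for a, b in zip(alice_shares, bob_shares)]
print(reconstruct(sum_shares))  # 113000
```

Real MPC protocols add machinery for multiplication, malicious parties, and dropout, but the privacy principle is the same: the raw data never leaves the owner's control in readable form.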
Hopefully, personal AIs of the future can be built in a way where neither the state nor big tech is in control.
Only you are.
In a true data-democratic way.
About Pernille Tranberg
Co-founder of the European think-do-tank Dataethics.eu and appointed by the Ministry of Industry, Business and Financial Affairs as a member of its Expert Group on Tech Giants.
Pernille Tranberg is a distinguished data ethics advisor renowned for her expertise in data democracy, data ethics, data literacy, and the ethical use of personal data across businesses, organizations, and governmental bodies.
She has presented at events such as TEDx, MyData, and SXSW and is the author of eight books, the latest being "A Data Democracy comes with Individual Data Control" (2021). Earlier books include "Data Ethics – The New Competitive Edge" (2016) and "Fake It" (2012), about big data and digital self-defense.