
> 'ensuring that sensitive information is accessible only to authorized entities and processes within KIN'

I'm sorry but I don't quite follow how this is distinct from the LinkedIn privacy policy you started with as a counterexample. If you were served with a subpoena, would you be able to comply and provide the user's notes or not? If not, how does the cloud-based LLM processing actually work in a provably trusted way?


Thanks for the good questions!

> I'm sorry but I don't quite follow how this is distinct from the LinkedIn privacy policy you started with as a counterexample. If you were served with a subpoena, would you be able to comply and provide the user's notes or not?

The formulation in that paragraph is maybe not so clear, thanks for bringing it to our attention. The user's data lives only on the user's device. Therefore, if we were served a subpoena, there would be no way for us to hand over the user's data. If/when some of it is stored in the cloud (e.g. for backups/sync), it will be encrypted with the user's key from their device, so that we are unable to decrypt it.
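To make that concrete, here is a minimal sketch of that client-side encryption pattern (not KIN's actual code), in Python with the `cryptography` package; the `upload_backup` call is hypothetical and stands in for whatever sync endpoint is used:

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device (e.g. in the OS
# keystore); it is never sent to the server.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

note = b"user's private note"
ciphertext = cipher.encrypt(note)  # encrypted before it leaves the device

# upload_backup(ciphertext)  # hypothetical sync call: the server stores
#                            # the blob but cannot decrypt it

# Decryption is only possible with the on-device key.
assert cipher.decrypt(ciphertext) == note
```

Since the server only ever receives ciphertext, a subpoena could at most yield an opaque blob without the key needed to read it.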

> If not, how does the cloud-based LLM processing actually work in a provably trusted way?

While we try to do as much as we can on-device, we still leverage some larger LLMs in the cloud. At the moment we use Azure's OpenAI service for that, but we are in the process of migrating fully to open-source models. Once that migration succeeds, we will be able to host those models in more trusted and secure environments, such as the Trusted Execution Environments described in this article series. With TEEs you get what is called attestation: a cryptographic proof of the integrity of the closed-off environment, which a client can verify before sending any data.
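For a feel of what "verifying an attestation" means in practice, here is an illustrative Python sketch; the types, helper, and measurement value are hypothetical stand-ins for a vendor SDK (e.g. Intel TDX or AMD SEV-SNP tooling), not a real API:

```python
from dataclasses import dataclass

@dataclass
class AttestationQuote:
    measurement: str         # hash of the code loaded into the enclave
    vendor_signature: bytes  # signed by a key chaining to the CPU vendor

# Hash of the enclave image we expect to be running (hypothetical value).
EXPECTED_MEASUREMENT = "sha384:<hash-of-approved-enclave-image>"

def verify_vendor_signature(signature: bytes) -> bool:
    # Stub: a real client validates the certificate chain against the
    # CPU vendor's root of trust using the vendor's verification library.
    return True  # placeholder only

def is_trusted(quote: AttestationQuote) -> bool:
    # The environment is trusted only if (1) the quote is genuinely signed
    # by the hardware vendor and (2) it measures exactly the code we expect.
    return (verify_vendor_signature(quote.vendor_signature)
            and quote.measurement == EXPECTED_MEASUREMENT)
```

Only after `is_trusted(...)` returns True would the client send its prompt to the cloud-hosted model, which is what makes the setup provable rather than a matter of trusting the operator's word.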

Hope that answers your questions!
