AI Is Driving Growth in Cloud Usage, but Concerns About Privacy Persist
Artificial Intelligence (AI) can draw on vast pools of data and make logical inferences across billions of pages of information, allowing it to solve complex problems and be leveraged to improve business outcomes.
As a result, cloud providers have seen double-digit growth in traffic since the release of ChatGPT, Gemini, Claude, and other GenAI tools. Amazon Web Services, for example, reported 13% year-over-year growth, and Alphabet's cloud unit grew 26% over the same period. Similarly, Microsoft said its Azure cloud business grew 30% and credited 6 percentage points of that growth directly to rising demand for AI.
“The uptick in AI usage on the cloud is in large part thanks to enterprise customers testing use cases,” said Stefan Slowinski, global head of software research at investment bank BNP Paribas Exane. However, financial companies, healthcare institutions, and government agencies, among others, are still waiting to see whether crucial privacy concerns can be resolved before fully investing in AI.
One of those concerns is that every time a user interacts with an AI tool, the information from that interaction can be fed back into the model's training data, exposing private data to the learning model and the hosting company, and there is currently no reliable way for the user to prevent this or delete what has been retained. Slowinski points out that the risk for the hyperscalers who host AI models is that too few AI use cases make it past the pilot phase because providers are not able to develop clear enough safety controls.
Fully Homomorphic Encryption Creates a Private LLM
For AI to fulfill its potential, proprietary and sensitive information must be secured. The most promising innovation under development for securing private data is Fully Homomorphic Encryption (FHE). FHE allows AI applications to perform computation and analysis exclusively on encrypted data, so that the data itself is never exposed to the learning model.
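To make the idea of computing on encrypted data concrete, the sketch below is a simplified illustration using the Paillier cryptosystem, which is additively homomorphic: two numbers can be added while both remain encrypted. It is not Chain Reaction's technology, and the parameters are deliberately tiny and insecure, chosen only to keep the example readable.

```python
# Illustrative only: Paillier encryption with toy, insecure parameters.
import secrets
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def generate_keys():
    # Tiny primes keep the example readable; real systems use 2048-bit or larger moduli.
    p, q = 293, 433
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification for Paillier
    mu = pow(lam, -1, n)         # valid because g = n + 1
    return (n, g), (lam, mu, n)

def encrypt(public_key, m):
    n, g = public_key
    n_sq = n * n
    while True:                  # pick a random blinding factor coprime to n
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(private_key, c):
    lam, mu, n = private_key
    n_sq = n * n
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

def add_encrypted(public_key, c1, c2):
    # The homomorphic property: multiplying ciphertexts adds the plaintexts.
    n, _ = public_key
    return (c1 * c2) % (n * n)

public_key, private_key = generate_keys()
c1 = encrypt(public_key, 20)
c2 = encrypt(public_key, 22)
c_sum = add_encrypted(public_key, c1, c2)      # computed without decrypting anything
print(decrypt(private_key, c_sum))             # 42
```

In a fully homomorphic scheme, the same principle extends to multiplication as well, which is what allows entire computations, including model inference, to run over ciphertexts; the heavy cost of those ciphertext operations is the main reason real-time FHE calls for dedicated hardware acceleration.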
As an emerging leader in Privacy-Enhancing Technologies, Chain Reaction is at the forefront of the race to design and produce 3PU™, a revolutionary processor that will enable real-time FHE processing. This technology will enable AI to process data without compromising privacy, creating a Private LLM (Large Language Model). This would finally enable corporate entities, public institutions, and hyperscalers to embrace AI fully, confident that their proprietary code and sensitive information remain secure and anonymous.