UN Report Shows Why Real-Time Encryption Matters for Ethical AI

The United Nations recently released its Governing AI for Humanity report, outlining its vision for responsible, fair, and globally beneficial AI governance. The report underscores the need for AI that respects human rights, ethical standards, and international law, protecting vulnerable groups and ensuring that AI development benefits all of humanity. It advocates for a global, inclusive approach to AI governance to prevent power concentration, bridge digital divides, and incorporate diverse perspectives, especially from underrepresented regions.

Data Privacy in AI

One of the primary areas of interest in the report is governance with regard to privacy and data security. This includes a call for an international framework for AI training data that promotes privacy and interoperability across jurisdictions, supporting transparency and accountability and setting standards for data ownership and use.

The report warns against potential exploitation of data in competitive markets and advocates for a “race to the top” in which governments, corporations, and public trusts collaborate to empower AI through ethical data usage. It encourages international standards to prevent a decline in privacy protections across regions due to competitive pressures.

Perhaps most importantly, the report recommends adopting privacy-enhancing technologies (PETs) to enable secure data processing without compromising individual privacy. In so doing, the UN recognizes how critical secure, encrypted data processing is to preventing misuse and building public trust in AI systems, enabling them to grow responsibly and securely, thereby providing even greater benefit.

FHE Unlocks AI’s Potential

A dedicated privacy processor using Fully Homomorphic Encryption (FHE), such as Chain Reaction’s 3PU™, could significantly enhance trust in AI systems. By enabling the analysis of encrypted data without decryption, it ensures sensitive information remains secure throughout processing, eliminating the exposure of plaintext data during analysis. This layer of protection is valuable for all private data, and especially for vulnerable groups who are at a higher risk of data exploitation.

With FHE-based processing, governments, organizations, companies and more can unlock AI’s full potential while maintaining individuals’ control over their personal information, aligning with the report’s emphasis on privacy-preserving AI governance. This technology would make it possible to utilize AI for social good without compromising privacy, accelerating the UN’s race to the top and achieving the goal of responsible, ethical, and globally beneficial AI.

Medical Privacy in the Search for a Cure for Cancer

In today’s digital landscape, privacy has become a top concern. With data breaches making headlines, the fear of losing control over our personal information is stronger than ever. The recent US healthcare data breach, affecting 100 million people, is a daunting example of our vulnerability: our medical records, passwords, financial data, and identity are all at risk.

This is even more true when it comes to protecting our medical and biometric data: unlike financial information, which can be protected or, if necessary, reset, one’s biometric data is indelible and permanent, making its protection vital. While governments have taken massive steps forward to ensure our medical privacy through regulation, such efforts do not address the technological advancements that threaten our personal data.

Yet that same technology offers promise for a better world. The combination of high-performance computing and artificial intelligence (AI) has advanced analytical capabilities to enable solutions and treatments for many of today’s most difficult medical challenges. The biggest impediment to such analytical breakthroughs is a lack of real-world data.

The Promise of AI for Healthcare Innovation

AI has the potential to revolutionize healthcare, but it needs vast amounts of data to uncover patterns that indicate diseases early in the process. Given enough information, AI can identify subtle warning signs for cancer or heart disease. For instance, if AI analytics are applied to isolate a pattern of test results that indicates early onset of heart disease, researchers can then watch for that pattern and possibly provide preventative medications to slow or even avoid that onset.

In another example, Google uses AI to analyze DNA samples for specific genomic patterns that indicate a likelihood of developing a certain type of cancer. The NIH uses AI to analyze DNA to link the best medications to the patient’s specific genetic makeup for optimal results. With enough data to work with, it may not be long before AI identifies actual cures for many cancers.

This leads to a critical question: how can we harness the power of AI for medical advancements without compromising our privacy? To fuel these advancements, AI requires vast amounts of real-world data. But sharing medical information creates an immense risk to our privacy. A new technology is needed – one that protects personal data while enabling incredible medical innovations.

The Real Danger to Our Data Privacy

The single greatest threat to our privacy is data breaches of unencrypted data. While hackers are adept at accessing secure servers, they are mostly powerless against encryption, which keeps the data within those servers securely protected. Data breaches and leaks are problematic because the data was stored, transferred, or processed while unencrypted.

While the solution might seem as simple as applying encryption to data at all stages (at rest, in transit, in use), the truth is that today it is impossible to process encrypted data at scale. Therefore, any medical data that you share for processing, whether with a lab for analysis, an insurance company to receive a quote, or an academic institute for medical research, is highly likely to be unencrypted at some point to enable it to be processed.

The Case for Fully-Homomorphic Encryption

One technology that has been developed to address exactly this issue is Fully-Homomorphic Encryption (FHE). FHE is a cryptographic technique that enables data to be processed without ever needing to be decrypted, comparable to a locked vault in which data can be analyzed but not accessed. That means that the privacy of the data is preserved no matter who uses it and no matter what they are using it for.
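The “compute without decrypting” idea can be seen in miniature with the Paillier cryptosystem, a simpler, additively homomorphic scheme (not full FHE, which also supports multiplication on ciphertexts). The sketch below uses toy key sizes purely for illustration; real deployments use keys thousands of bits long:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustration only -- NOT full FHE, and these key sizes are NOT secure.
p, q = 1000003, 999983           # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)     # Carmichael function of n

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(2, n)        # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two values encrypted independently -- say, readings from two patients
c1, c2 = encrypt(120), encrypt(95)

# The server multiplies the ciphertexts, which ADDS the hidden plaintexts,
# without ever seeing 120 or 95 in the clear
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 215
```

Full FHE schemes extend this property to both addition and multiplication on encrypted data, which is what makes arbitrary computation, including machine learning inference, possible on ciphertexts.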

Suppose, for instance, that an AI machine learning model needs millions of patient records to accurately identify the genetic patterns of a certain type of cancer. Today, providing such information would require patient consent, reducing the likelihood of acquiring enough records for an accurate analysis. And with traditional encryption methods, the records would have to be decrypted before they could be accessed and analyzed, leaving patient identities and medical records vulnerable.

With FHE, though, data remains encrypted at all times and at every stage, eliminating the need for the cumbersome patient consent process. Encrypted medical information, even if it does end up in the wrong hands, is gibberish unless it is decrypted. The benefit is twofold: it accelerates medical innovation by providing secure access to valuable data while ensuring that patient privacy is maintained.

The Challenge of Applying FHE at Scale

While FHE already exists and can be applied on a small scale for specific use cases, it cannot yet be implemented at the scale required to enable such medical breakthroughs. This is because there is significant computational overhead involved in the complex cryptographic calculations that are required for processing encrypted data. It is estimated that applying FHE to a cloud-based AI model for analyzing millions of records would require nearly a million times the processing power of today’s processors.

That is why much effort is being made by both government-funded projects, such as DARPA’s DPRIVE, and private corporate efforts, such as Chain Reaction’s 3PU™ privacy processor, to develop a hardware-based accelerator that can implement FHE at scale. Once the processor exists to overcome the computational overhead, the possibilities for privacy-safe medical advancement are endless.

Once FHE is adopted, all our privacy concerns will be assuaged. Our personal data will remain encrypted at all times, unable to be accessed even if it were to be hacked or leaked. And just as importantly, by removing the privacy hurdle, we will open a new world of medical research and innovation, enabling us to live both healthier and more securely.

Biometrics Privacy in the Cloud Era, Part 2

Biometrics continue to be a hot news topic, especially as the technology penetrates ever-further into our lives and privacy concerns escalate. As we discussed in Part 1 of this two-part series of tech shorts, biometrics has moved beyond our smartphones and into the cloud, raising questions about the ability of the various applications to safeguard our personal data when it is (necessarily) unencrypted while being processed.

The growing prevalence of biometrics has led to concerns about their use in payment apps (such as Amazon One) and travel authentications (such as the TSA’s Touchless Identity Solution). Another recent story involves the questionable use of biometrics for security purposes at sports arenas. But the next generation of biometrics goes even deeper into our private makeup, demanding a technological solution to protect our personal data.

Next-Gen Biometrics Challenge Privacy

For example, the recent collapse of the popular DNA testing company 23andMe has brought to the forefront the issue of who owns our private data, in this case our genetic code. Even before its financial problems, the company faced a massive data breach, failing in its responsibility to safeguard sensitive data, and its imminent dissolution then led to accusations that it was selling off DNA data to the highest bidder in order to stay afloat. The recent news of the company’s bankruptcy has also led to speculation that the data will be used to pay off debt.

Another recent example is that of Worldcoin. OpenAI CEO Sam Altman is trying to implement a groundbreaking way of replacing physical identification (eventually overcoming the scourge of AI bots and fake identities) by creating a worldwide database of iris scans. To incentivize people to participate, each volunteer is rewarded with 25 Worldcoin tokens. Worldcoin is an Ethereum-based cryptocurrency currently valued at about US$2 per coin.

While this project is revolutionary, the question of ownership of the scanned data, the ramifications of a breach of security, and the legality of potentially profiting from the collection of biometric data all must be addressed.

FHE Can Overcome the Concerns

One way to ensure that private data remains private is through Fully Homomorphic Encryption (FHE). FHE allows data to be processed while it remains encrypted, such that ownership is no longer as much of an issue. Whether the data remains with the original collector, is sold elsewhere, or is even hacked, it stays encrypted at all times; while it can still be processed and used, it cannot be accessed to violate the privacy of the individual who supplied it.

Certainly, there are still issues with biometrics that must be addressed through legislation and regulation, and in many cases, courts will determine whether these companies are acting within the bounds of fair play. But at least with FHE we can rest assured that our personal biometric data will be kept out of the hands of nefarious individuals and our privacy will be secured.

Biometrics Privacy in the Cloud Era, Part 1

By 2022, 81% of smartphones were equipped with biometric scanning, highlighting the convenience such technology offers users. Native biometric systems provide a high level of security by leveraging algorithms and hardware to authenticate users, ensuring that sensitive data stays secure from hacks and leaks within the device. Beyond simply unlocking a phone, biometric authentication also enables quick and seamless access to mobile applications, and this enhanced user experience continues to drive the growing adoption of biometric technology.

However, the next generation of biometric applications is expanding beyond smartphones into the cloud, raising new concerns about biometrics privacy and security. Cloud-based data, no matter how reputable the company or government organization that is handling it might be, is vulnerable, because unencrypted data must be accessed from storage to process and match biometric information.

Next-Gen Biometric Apps in Action

Amazon’s One app takes a photo of a person’s palm, converts it into a digital signature, and stores it in its cloud. The app user can then pay for groceries at Whole Foods without a credit card or phone, simply by hovering his or her hand over a sensor at the checkout. A similar option using face scanning is being implemented by JPMorgan Chase for payments at the Whataburger restaurant chain.

Biometrics are not only being used for retail. The TSA has started scanning people’s faces instead of their passports. Travelers who have registered for the Touchless Identity Solution and added their ID photo to the TSA’s cloud-based Travel Verification Service will benefit from a streamlined security process that results in shorter lines and less friction. In another instance of biometric identification, MasterCard has begun offering facial recognition as an option for accessing its user accounts in place of passwords.

The Threat to Privacy

These applications expand biometric adoption to hundreds of millions of users, keeping all that sensitive information in the cloud. While considerable precautions are taken to secure it, the data still must be decrypted before it can be processed, making it vulnerable. There have already been examples of biometrics privacy breaches, which are especially threatening because they include vital information that cannot be replaced or changed, such as full name, date of birth, or height and weight.

The Answer is FHE

There is, however, a viable solution to the biometrics privacy issue on the horizon. Fully Homomorphic Encryption (FHE) is a technology that will change the paradigm and enable processing directly on encrypted data, thereby ensuring that biometric information is kept private, even in the cloud.

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of FHE at cloud scale. This cutting-edge technology will usher Amazon, the TSA, JP Morgan, and many others into a new era of privacy-preserving applications, data collaboration, and security, while enabling all the benefits of biometric scanning.

To read Part 2 of this two-part series on Biometrics Privacy, click here.

Another of the Tech Giants Joins the FHE Bandwagon

At the beginning of August, Apple announced that it was releasing its new “Swift-Homomorphic-Encryption”, an open-source package to empower developers and researchers to create privacy-preserving applications within the Apple ecosystem. This announcement makes Apple the latest tech giant to signal its commitment to privacy by making Fully Homomorphic Encryption (FHE) tools available to the public.

Even more significantly, the release of such tools indicates recognition among the tech leaders that FHE is a promising privacy-enhancing technology (PET), enabling processing on data while it remains encrypted. Let’s explore what tools are now available from the tech giants, as they hop aboard the FHE bandwagon. 

Software Libraries and Toolkits 

A software library contains programming code in a variety of languages that developers can use to implement FHE within their applications. Some libraries also include basic programs or toolkits which help developers streamline their coding process and avoid rebuilding modules from scratch.  

Tech giants have an incentive to release their own libraries of code optimized for their specific programming or application environments.  This code can run on different hardware such as CPUs, GPUs, TPUs (for AI), allowing software engineers to test new optimizations, compare benchmarks, and assess how well their code enhances privacy across different platforms, while remaining within the tech giant’s ecosystem. 

FHE Libraries: What the Tech Giants Offer 

Google has released its Jaxite cryptographic software library, built on JAX, a framework originally developed to accelerate neural network computations that has proven just as effective at accelerating FHE computation. Google is also developing HEIR (Homomorphic Encryption Intermediate Representation) to enable interoperability of FHE programs within its programming environment.

Microsoft has offered SEAL (Simple Encrypted Arithmetic Library) since 2015 to apply FHE to basic arithmetic operations, but with ongoing research leading to more complex development, the company is hoping to apply SEAL to more advanced applications. 

Intel offers a toolkit that uses various libraries (including Microsoft SEAL) for implementing Homomorphic Encryption in the Intel architecture. 

And, as mentioned, Apple has now joined this list with its Swift-Homomorphic-Encryption. Apple’s offering is not as powerful or scalable as full FHE in that it does not perform bootstrapping, limiting it to computations of bounded depth; in exchange, it is less compute-intensive and easier to implement.

Compilers 

Several tech giants also offer translators and compilers that convert high-level FHE code into optimized applications capable of operating on encrypted (ciphertext) data. These tools automate much of the process, helping developers build efficient and accurate FHE applications for various tasks. 

FHE Compilers: What’s Available from the Tech Giants 

Companies that offer FHE compilers include Google, whose FHE Transpiler converts C++ programs and TensorFlow machine learning models; Microsoft, whose CHET (Compiler and Runtime for Homomorphic Evaluation of Tensor Programs) implements FHE in Tensor neural network inference tasks; and Amazon, which offers an FHE compiler within SageMaker to enable inference endpoints to operate on encrypted data and generate encrypted results. 

Real World Applications 

The tech giants have not only made FHE accessible for programmers and developers but have also made basic applications for public use, incorporating FHE or similar forms of homomorphic encryption. 

Tech Giant FHE Applications 

IBM, which made the initial breakthrough with developing FHE, offers an online demo of FHE for performing secure AI and Machine Learning analytics on encrypted data, but such a demo must be scheduled through IBM’s cybersecurity consulting services. 

Apple uses its Swift HE in the Live Caller ID Lookup feature in iOS, keeping both the query and the result private even from the system.

Microsoft has used the code in its SEAL library to process basic Fitbit and Apple Health data, converting raw figures such as total runs, distance, and time into metrics such as average speed, while all data (including private user info) remains encrypted. Microsoft Edge has a Password Monitor feature that privately compares a user’s passwords against a database of known compromised credentials; by using FHE to conduct the check, it keeps the user’s passwords private from Microsoft and any other party while they are being monitored.

Finally, in an application that is not yet publicly available, Intel has been working with Nasdaq to implement AI-based fraud detection and anti-money laundering applications using FHE calculations.

Software and Hardware Development 

It is not only the tech giants who are driving the development of FHE. Companies like Zama, Duality, and Fhenix are offering FHE solutions to protect private data on a per company basis, typically securing data equivalent to that of a single server rack through software. While this approach may not yet scale to the size of cloud or AI data centers, it represents a significant step forward in enhancing data privacy. 

One reason why FHE has not been adopted more comprehensively already is the massive increase in computational complexity for some operations. The breakthrough for FHE will be the development of a dedicated FHE hardware processor that can accelerate FHE computation enough to overcome this complexity while scaling for cloud and AI deployments. Efforts toward this have been ongoing for many years, with the most well-known project sponsored by DARPA (Defense Advanced Research Projects Agency) of the US Department of Defense – the DPRIVE (Data Protection in Virtual Environments) initiative.

Microsoft, Intel, and notable others have been working on an FHE hardware accelerator for the past three years as part of the DPRIVE initiative, which seeks to enable FHE computation within a factor of ten of unencrypted computation so that data will be secure in all states across DoD and commercial applications. DPRIVE’s goal is to accelerate FHE computation to 10,000 times the processing of a standard CPU, but to truly achieve cloud and AI scale, the acceleration will need to approach 100,000 times a CPU’s capability.
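Taken together with the roughly million-fold overhead estimated earlier, these targets imply some simple back-of-envelope arithmetic (the ~1,000,000x figure is the estimate quoted above, not a measured constant): a 10,000x accelerator would still leave FHE about 100x slower than plaintext processing, while an acceleration approaching 100,000x brings it within roughly a factor of ten.

```python
# Back-of-envelope arithmetic for FHE acceleration targets.
# Assumption: ~1,000,000x slowdown of FHE vs. plaintext on a standard CPU,
# the estimate cited earlier in the text.
fhe_overhead = 1_000_000     # FHE vs. plaintext, unaccelerated
dprive_target = 10_000       # DPRIVE's stated acceleration goal
cloud_target = 100_000       # acceleration needed for cloud/AI scale

# Residual slowdown relative to plaintext after acceleration
assert fhe_overhead // dprive_target == 100   # still ~100x slower than plaintext
assert fhe_overhead // cloud_target == 10     # within a factor of ten
```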

There are also private semiconductor companies that are working on such a processor, including Cornami, Optalysys, Niobium, and Chain Reaction, which is developing its 3PU™ Privacy Processor. Chain Reaction’s core competency in chip design enables it to accelerate FHE computation with the goal of achieving cloud and AI scale.  

FHE: The Holy Grail of Privacy 

The fact that so much development effort is being placed into FHE, especially by the biggest companies in the tech world, gives credence to the idea that FHE is the “holy grail” of privacy. It is the only post-quantum secure technology for protecting our privacy, and as AI and cloud computing gain a foothold and expand to ever-more industries, only FHE can be counted on to protect our personal data. 

Apple’s latest announcement is further evidence that FHE is no longer merely a future technology, as there are libraries, compilers, basic applications, and limited software solutions that can be used right now for specific tasks and corporate uses. 

However, only once a dedicated processor, such as Chain Reaction’s 3PU™, is developed will FHE processing be accelerated to make it ubiquitous throughout the cloud and AI architectures. At that point, FHE will reach its full potential and become the game-changing technology that the tech giants are betting on it to be. 

Effective Performance: Maximizing Bitcoin Mining Profitability

The Bitcoin mining industry has traditionally been defined by performance, a term that consists primarily of hashrate and power efficiency. Whether a mining company is successful is almost exclusively a function of whether it can extract maximum hashing from its rigs while consuming as little power as possible.

While this is the key metric, the rated performance of a specific machine is not necessarily reflected in the miner’s overall profitability, because many additional factors come into play. For example, two of the most important factors to a mining data center’s bottom line are uptime and operational flexibility.

Especially since the halving event this past April, the added difficulty of mining Bitcoin has made profitability that much more complex an equation for miners, who must pay close attention to the performance of their entire facility. Hashrate and power efficiency become two variables among many when deciding which rigs should make up their data centers.

Therefore, it is necessary to define a metric that measures the hashrate and efficiency after taking into account all the external factors that can adversely affect performance. Such an evaluation would focus on a data center’s effective performance.

Factoring Uptime into Effective Performance

Uptime is easy to understand as a critical factor in the profitability of a data center. Every second that a system is down is an opportunity lost. This has always been a factor for mining companies, but now, mining difficulty has made it such that the poor uptime of the incumbent system providers cannot be ignored.

Among the difficulties miners face that directly affect uptime are rigs that shut down or malfunction at high ambient temperatures, high dead-on-arrival rates, rigs that break down within the first few months of operation, slow maintenance response times, and time lost on reboots. An excellent hashrate is highly attractive, but mining companies know that hashrate is meaningless while the mining rig is down.

Thus, by focusing on maintaining uptime, a miner’s effective performance is much greater, and the mining data center is much more profitable.

Adding Efficiency Through Operational Flexibility

Another major factor in today’s mining industry is operational flexibility. A miner’s ability to design a data center with built-in agility provides the opportunity to improve hashing density, upgrade existing systems (instead of replacing them), and incorporate custom infrastructure using de facto form factors, without needing to overhaul existing deployments. This introduces efficiency, and therefore new profitability, into the data center, leveraging the unique knowledge and experience that mining companies possess to overcome the deficiencies of older generations of miners through out-of-the-box planning.

Another aspect of operational flexibility can be applied to curtailment, the highly lucrative practice of selling energy back to the grid at times of peak need. Curtailment has become a prominent strategy for adding revenue or offsetting power costs, but it has its downsides. For example, turning miners on and off can reduce their reliability, create imbalance in the data center, and limit the opportunity to participate in more aggressive response-time curtailment programs. But curtailment does not need to be an all-or-nothing proposition. With built-in flexibility in the mining systems, it is possible to take advantage of curtailment opportunities while continuing to hash at a lower rate.

When miners incorporate operational flexibility into their data centers, they maximize their effective performance and significantly enhance their chances at profitability, even in the post-halving era.

Maximize Your Mining

As the rated performance of a mining system in terms of hashrate and power efficiency does not reflect the data center’s ability to turn a profit, large-scale bitcoin miners seek mining solutions that emphasize effective performance.

For example, Chain Reaction’s EL3CTRUM Bitcoin miner is designed to address the factors that maximize effective mining performance. EL3CTRUM was conceived based on input and guidance from miners, with a focus on optimizing reliability and resilience toward enhancing uptime, and introducing operational and data center design flexibility by offering ASICs, hashboards, and systems. The result is improved profitability and reduced total cost of ownership.

There is a lot more to surviving in today’s market than just buying the rigs with the highest hashrate and best power efficiency. In today’s fast-paced, competitive mining industry, the most successful miners are those who use their expertise and experience to take a holistic view of their mining operations, and who ensure that maximum uptime and flexibility are factors toward optimizing the effective performance of their data centers.