Intro to Privacy Tech, Part 3

In this series of posts, we first described the data privacy problem and the privacy tech landscape, and then explained why many businesses cannot yet migrate to the cloud. In this post, we'll tackle the privacy problem from the hardware point of view.

Privacy tech has given us solutions for transferring and storing data safely, but it is still searching for an ideal way to process data. As we'll describe, the real-time data processing challenge can only be met with specialized hardware.

Trusted Execution Environment

One way to process data securely is to move sensitive data into a secure compartment, known as a Trusted Execution Environment (TEE), where operations are performed on it. TEEs are included as sub-blocks of some CPUs, such as Apple's Secure Enclave (found in the iPhone 5s and newer) or Intel's SGX. CPUs with this feature can offload sensitive data processing to the TEE.

You can think of this technology as storing your cash in your wallet versus putting it in a safe – both can be stolen, but it’s harder to steal money from a safe.

While TEEs provide a higher level of security, they lack the throughput to process large amounts of data, and they are still not inherently secure, since the data must be decrypted before it is processed.

Why Hardware?

Some of the challenges in privacy tech have been solved with improvements in software, such as new encryption algorithms. More complex problems, such as real-time processing of encrypted data, can only be solved with specialized hardware.

When we say software, we mean programs that run on a CPU (Central Processing Unit). CPUs are general-purpose processors meant to handle a wide variety of applications, which means that for specific applications, such as graphics, they may not provide enough compute power. This situation led to the creation of specialized chips called GPUs (Graphics Processing Units). Because some of the parallel arithmetic used in cryptography overlaps with the needs of graphics workloads, GPUs have also been used to accelerate tasks such as Bitcoin mining.

When it comes to Privacy Enhancing Technologies (PETs), such as fully homomorphic encryption (FHE), we need more specialized chips. GPUs just weren’t built to handle the operations used in these complex cryptographic algorithms.

ASICs to the Rescue

Application Specific Integrated Circuits (ASICs) are chips that implement a specific task or set of tasks and are optimized for the performance and efficiency needs of those tasks.

To achieve significant acceleration of PETs, such as FHE and ZKP (Zero-Knowledge Proofs), we need an architecture customized for these specific algorithms, one that is optimized to run the mathematical operations, such as NTTs (Number Theoretic Transforms) and modular multiplication, that are the building blocks of FHE/ZKP.
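
To give a flavor of these building blocks, here is a minimal Python sketch of modular multiplication and a naive, O(N^2) Number Theoretic Transform over a toy prime modulus. Real FHE/ZKP implementations use much larger parameters and O(N log N) butterfly circuits, so treat this purely as an illustration of the arithmetic an accelerator must perform at massive scale.

```python
# A minimal sketch (toy parameters, naive algorithms) of the two primitives
# named above: modular multiplication and the Number Theoretic Transform (NTT).
# Real FHE/ZKP systems use much larger moduli and O(N log N) butterfly circuits.

Q = 17       # toy prime modulus with Q ≡ 1 (mod N), so an N-th root of unity exists
N = 8        # transform size (power of two)
OMEGA = 9    # a primitive N-th root of unity modulo Q (9^8 ≡ 1 mod 17)

def modmul(a, b, q=Q):
    """Modular multiplication: the low-level operation repeated at massive scale."""
    return (a * b) % q

def ntt(coeffs):
    """Naive O(N^2) forward NTT: evaluate the polynomial at powers of OMEGA mod Q."""
    return [sum(modmul(c, pow(OMEGA, i * j, Q)) for j, c in enumerate(coeffs)) % Q
            for i in range(N)]

def intt(values):
    """Inverse NTT, using the inverse root of unity and a 1/N scaling factor."""
    inv_omega = pow(OMEGA, -1, Q)
    inv_n = pow(N, -1, Q)
    return [modmul(inv_n, sum(modmul(v, pow(inv_omega, i * j, Q))
                              for j, v in enumerate(values)) % Q)
            for i in range(N)]

# Polynomial multiplication (cyclic convolution) becomes cheap pointwise modular
# multiplication in the NTT domain, the pattern FHE/ZKP accelerators optimize.
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
product = intt([modmul(x, y) for x, y in zip(ntt(a), ntt(b))])
print(product)  # [5, 16, 0, 1, 11, 11, 0, 0] (coefficients of a*b reduced mod 17)
```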

ASICs are the ideal technology for a hardware implementation of FHE/ZKP, since only ASICs can provide the performance and efficiency needed to meet this challenge.

Imagine the Future

If we could create an ASIC-based solution for PETs, what would the future look like? We could make the public cloud secure, turning it into a trusted environment that offers massive acceleration and energy efficiency.

Technology that meets these needs will revolutionize access to data for the biggest global industries.

Example 1: Age Verification

Today, when you try to enter a club or buy alcohol, you present your driver's license, freely handing over all of your information: name, exact age, address, and more.

In the future, you will use a ZKP to verify that you are over the minimum age, without revealing any personal information.
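
To make the idea concrete, here is a minimal Python sketch of a classic interactive proof of knowledge, the Schnorr identification protocol, which lets a prover convince a verifier that they know a secret without revealing it. A real age check would build on this kind of primitive, combining it with a range proof over a digitally signed credential; the group parameters and secret below are toy values chosen only for illustration.

```python
import secrets

# Toy Schnorr identification protocol: the prover convinces the verifier that
# they know a secret x satisfying y = G^x mod P, without revealing x.
# Toy parameters for illustration only; real systems use large groups, and an
# age check would additionally need a range proof over a signed credential.
P, Q, G = 23, 11, 2               # G generates a subgroup of prime order Q in Z_P*

secret_x = 7                      # the prover's secret
public_y = pow(G, secret_x, P)    # public value derived from the secret

# --- one round of the interactive protocol ---
r = secrets.randbelow(Q)          # prover: pick a random nonce
t = pow(G, r, P)                  # prover -> verifier: commitment
c = secrets.randbelow(Q)          # verifier -> prover: random challenge
s = (r + c * secret_x) % Q        # prover -> verifier: response

# The verifier checks the relation without ever learning secret_x.
assert pow(G, s, P) == (t * pow(public_y, c, P)) % P
print("proof accepted")
```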

Example 2: Reverse Time Migration

Today, the oil and gas industry uses a method called Reverse Time Migration (RTM) to produce high-resolution images of subsurface rock structures. It is a complex process that can help identify potential oil and gas reserves, and it can eliminate the need for fracking. RTM is also computationally intensive, requiring a significant amount of computing power.

The seismic data processed by RTM is so sensitive that companies would rather buy their own supercomputers to run RTM than risk exposing the data to competitors by storing it online.

In the future, this industry will be able to run RTM on FHE-encrypted data in the cloud, on the most powerful servers available, saving huge infrastructure costs while producing better results.

Example 3: Cancer Research

Today, one of the factors slowing research into a cure for cancer is the lack of sufficient data. Researchers can only use data that patients consent to provide, such as through clinical trials, medical records, or other voluntary platforms.

In the future, medical insights can be gained by running FHE computations on encrypted patient data. Imagine if researchers had access to every case of cancer in an entire country, or even around the whole world. Access to a dataset of this size would significantly reduce research time and dramatically advance cures for cancer.

And it’s not only for cancer, but for all diseases.

Wrap-up

Privacy is the biggest problem facing businesses and governments – and they are increasingly turning to privacy tech for the answer. But privacy tech is only a viable solution when it is accelerated by specialized hardware (ASICs), which provides the unparalleled performance, protection, and energy efficiency needed to run cutting-edge technologies like FHE and ZKP in real time and safeguard sensitive data.

Only when the cloud becomes a trusted environment can businesses and governments fully modernize their compute infrastructure and push the boundaries of science, medicine, healthcare, and space exploration.

This is the Holy Grail of Cloud Computing.

 

Specialized Compute Infrastructure, Part 1

Today's compute needs are vastly different from even a decade ago. The amount of data is expanding exponentially, the type of data is shifting, and the sort of processing needed by enterprises and individuals is changing. There have been rapid software and hardware advances to keep up with these evolving demands, including fast-paced advances in specialized compute. To appreciate the scale and velocity of recent advances, let's review the options for compute infrastructure that were available until recently.

Commodity Hardware, OEMs, ODMs

For most of the history of modern computing, CPUs have powered all workloads. Multi-functional, interchangeable, and cross-compatible, CPUs are designed and built for general-purpose computation and perform many different compute operations relatively well. The same commodity principle applies elsewhere in the server: a RAID (redundant array of independent disks), for example, combines commodity hard disks to provide high-volume data storage at the server level.

CPUs power enterprise and consumer compute products, including on-premises data centers, cloud data centers, and PCs. Most companies that design chips use OEMs (original equipment manufacturers) and ODMs (original design manufacturers) to integrate their chips into products, with OEMs and ODMs engaging at different stages of these processes. Lenovo, for example, embeds and integrates chips into servers, computers, and smart devices, whereas GigaIPC integrates chips into industrial motherboards, embedded systems, and smart display modules. This is still how most enterprise and consumer hardware is manufactured, though the chips used are developing fast.

The Evolution of the CPU

From the late 1980s, the performance and transistor density of CPUs increased steadily. Intel led the way with progressively faster processors and its own chip fabrication, and this vertical integration was an advantage in Intel's early days. Though there are still many foundries globally, the overwhelming majority produce chips using less advanced processes. It is increasingly difficult to keep pace with the market-leading fabs, which has resulted in consolidation at the top end of the foundry market.

The past decade has seen increasing competition in the CPU market, though AMD's and Intel's dominance is ongoing. The most significant recent development in CPU architecture came from Apple's M1 chip, a custom ARM-based processor released in 2020. The M1 chip delivered marked improvements in performance, memory, and power efficiency, and it ended Apple's dependence on other chip companies. Apple silicon has continued to increase performance and power efficiency; by increasing chip performance, Apple increases device performance and maintains its competitive edge.

Apple's M1 chip represents two important trends in CPU development. First, chip fabrication and chip design have segmented into increasingly separate industries, as both design tools and fabrication processes become more specialized and complex. Second, and as a result, the past decade has seen hyperscale software and product companies buy up smaller hardware design companies in an attempt to bring chip design in-house. This move reflects attempts to optimize CPUs for product-specific needs and is part of a much larger trend towards specialized compute hardware.

From Homogeneous to Heterogeneous Compute

Alongside rapid advancements in CPU architecture, another important trend is the move from homogeneous to heterogeneous compute. Until less than a decade ago, almost all computing systems were homogeneous – they performed one type of compute, so many identical systems could easily be strung together to increase the overall compute power available for a given task. Commodity hardware still provides important functionality for many enterprise workloads.

However, commodity hardware cannot keep up with the workloads needed to run Artificial Intelligence, Machine Learning, and Deep Learning applications. These applications require specialized hardware to provide the necessary acceleration and functionality; even the most advanced CPUs cannot keep pace.

The GPU

GPUs were originally designed to carry out the faster processing needed for gaming, where graphics interact with users in real time. Nvidia first developed a GPU to accelerate gaming graphics. Since then, the applications of the parallel computing enabled by GPUs have multiplied, from scientific computing in the early 2000s to AI, Machine Learning, and Deep Learning more recently. GPUs have become mainstream hardware solutions for accelerating different sorts of compute, including applications like computer vision in self-driving cars.

In the past 15 years, there has been a shift towards GPGPU (General Purpose GPU) computing, where technologies such as CUDA or OpenCL use the GPU's parallel compute capability for general-purpose workloads. Like CPUs, GPUs are becoming ever more specialized and optimized for particular workloads.
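
As a small illustration of what general-purpose computing on a GPU looks like in practice, here is a sketch assuming the CuPy library (a CUDA-backed, NumPy-like array library) and an NVIDIA GPU are available; the array size is arbitrary.

```python
# A minimal GPGPU sketch, assuming CuPy (a CUDA-backed, NumPy-like library)
# and an NVIDIA GPU are available. Each element-wise operation below is
# dispatched across thousands of GPU threads in parallel.
import cupy as cp

x = cp.random.random(10_000_000)   # arrays allocated in GPU memory
y = cp.random.random(10_000_000)

z = x * y + 1.0                    # element-wise math runs on the GPU, not the CPU
print(float(z.sum()))              # reduce on the GPU, copy only the scalar back
```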

ASICs

Unlike CPUs or GPUs, ASICs are chips with a narrow use case, geared towards a single application. The increasing need for heterogeneous, accelerated compute has led to a proliferation of ASICs. For the period 2021-2024, the ASIC market is projected to grow by over $9B. Recent examples of ASICs deployed at scale include the Google Tensor Processing Unit (TPU), used to accelerate Machine Learning functions, and the ASICs SpaceX uses for orbital maneuvering, moving spacecraft to new orbits at different altitudes. ASICs are at the forefront of a range of new industries including networking, robotics, IoT, and blockchain.

Conclusion

Despite the fast-paced development of silicon for compute, the majority of compute is still powered by CPUs. One-size-fits-all computing, which worked well from the first chips in the Walkman until recently, is no longer sufficient. Over just a few years there has been huge growth in compute hardware that optimizes use cases CPUs cannot address. The trend towards increasingly specialized compute is only accelerating.

Read part 2 for how specialized compute hardware is reshaping the future of compute infrastructure.

Intro to Privacy Tech, Part 2

[This is part 2 of a 3-part series on Privacy Tech. You can find part 1 and part 3 at these links.]

In a previous post we presented the current state of privacy tech – starting with how large an issue data privacy is and continuing with a description of the different types of privacy tech.

In this post, we’ll present information about moving data into the cloud.

Why Migrate to the Cloud?

To be competitive, businesses must lead with product innovation, value, and performance. They must continuously modernize their operations, their services, and their compute infrastructure – including the systems and workloads that support it all – but not all businesses can afford to build specialized on-premises data centers. Nor can they simply move to the cloud, due to privacy issues: any encrypted data stored in the cloud must be downloaded, decrypted, processed, re-encrypted, and then stored back in the cloud. This lengthy process makes any real-time data processing infeasible.
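
To make that round trip concrete, here is a minimal sketch assuming the Python `cryptography` package; the "cloud" is simulated by a local variable, and the computation step is a trivial placeholder.

```python
# A sketch of the download-decrypt-process-re-encrypt-upload round trip,
# assuming the Python `cryptography` package is installed. The "cloud" is
# simulated by a local variable; the computation step is a trivial placeholder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # held by the data owner, never by the cloud provider
fernet = Fernet(key)

# The cloud only ever stores ciphertext.
cloud_blob = fernet.encrypt(b"account=1234, balance=500")

# To compute anything on the data, the owner must:
downloaded = cloud_blob                    # 1. download the ciphertext
plaintext = fernet.decrypt(downloaded)     # 2. decrypt it locally
result = plaintext.upper()                 # 3. run the actual computation
cloud_blob = fernet.encrypt(result)        # 4. re-encrypt and store the result back

# Every step adds latency and moves data around, which is why real-time
# processing of private data in the cloud is infeasible with this approach.
```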

The CapEx costs (e.g., buying new servers) and OpEx costs (e.g., cooling and electricity) associated with on-premises data centers have pushed many businesses into the cloud. As more legacy data centers move their workloads to the cloud, it has become a $500B industry growing at a 21% CAGR.

So why doesn't every business move its data and processing into the cloud? There are three problems currently facing the future of modernized compute in the cloud.

Problem 1 – Cloud is Untrusted

Since cloud providers are 3rd parties, any data stored in the cloud must be stored encrypted. The data cannot be decrypted and processed there because malicious parties with access to that 3rd party could potentially access the decrypted data.

This limitation means that any company that needs to perform computations on private data most likely will not move to the cloud. For example, Visa maintains four of its own private data centers worldwide (two in the US, one in London, and one in Singapore) to process user transactions.

Problem 2 – Energy Costs Rise

The second problem facing cloud migration is the rising cost of energy. Costs are not just rising – due to COVID-19, Russia's invasion of Ukraine, supply chain issues, and other factors, we are in an energy crisis.

For businesses to move to the cloud, they require the security of compute-intensive PETs. This added processing comes at a cost, and data centers have no choice but to pass rising energy costs on to their customers (see Figure 1), which is not only a burden on existing customers but also makes it harder for data centers to attract new ones.

Figure 1: Global energy demand, 2015-2021

Figure 1 shows the rise (more than 2x) in the cost of energy for data centers – energy use has become a significant factor in the cost of running a data center.

Figure 2 shows the magnitude of the energy that data centers use: data centers consume as much electricity as some countries [1].

Figure 2: Magnitude of Consumption

This high cost of energy has led to a shift from high-performance computing (HPC) to more energy-efficient compute – companies are willing to sacrifice compute performance for cost savings.

Problem 3 – The Need for Speed

As compute becomes more complicated and involves more data, the acceleration required increases dramatically.

Figure 3: Compute Acceleration

As Figure 3 shows, a query, such as running SQL against a database, takes very little processing power, whereas Deep Learning (DL) training, such as training an Artificial Intelligence (AI) model, takes much, much more.

GPUs can achieve only up to about 200x acceleration over CPUs for these operations, but today's privacy solutions need on the order of 100,000x. Advancements in software help, but they cannot provide enough acceleration to meet the privacy challenge.

How Can We Make the Cloud Secure?

So far, we have described how businesses want and need to move to the cloud but can't due to privacy issues. If there were a solution that implemented PETs in a fast, energy-efficient manner, it would open the door to cloud migration.

Let’s take a step back and understand what PETs are.

The next evolution in privacy tech is called Privacy Enhancing Technologies or PETs. PETs enable us to perform operations on encrypted data without ever having to decrypt it.

While research on PETs has been going on since the late 1970s, they are only now becoming feasible for real-life use cases. There are many types of PETs, but two in particular provide significant privacy benefits for data centers: FHE and ZKP.

  1. Fully Homomorphic Encryption (FHE) is encryption that allows arbitrary computation on ciphertexts (as opposed to computing on plaintext). FHE enables an untrusted 3rd party to perform operations on data without revealing the input data or the internal state of the data during computation.
  2. Zero-Knowledge Proof (ZKP) is a method where one party (the prover) can prove to another party (the verifier) that they know a value x without conveying any information apart from the fact that they know the value x. ZKPs can be used for authentication between 2 parties, where no credentials – even hashed – need to be sent between the parties.

These types of PETs enable us to encrypt our data, store it in a public/cloud location, and perform operations on it without having to worry about it getting exposed.
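
To illustrate the principle of computing on ciphertexts, here is a deliberately simplified Python sketch. It uses textbook (unpadded) RSA, which happens to be multiplicatively homomorphic; this is not FHE and is not secure as written, but it shows how a third party can operate on encrypted values without ever decrypting them.

```python
# A toy illustration of the homomorphic principle: NOT real FHE and NOT secure.
# Textbook (unpadded) RSA is multiplicatively homomorphic, so an untrusted party
# can multiply ciphertexts without ever seeing the plaintexts.
# Classic small-number textbook keys; real systems use 2048+ bit moduli.
n, e, d = 3233, 17, 2753          # public modulus, public exponent, private exponent

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)

# The "cloud" multiplies the two ciphertexts without decrypting anything.
c_product = (c1 * c2) % n

# Only the key holder can decrypt, and recovers the product of the plaintexts.
assert decrypt(c_product) == 7 * 6
print(decrypt(c_product))          # 42
```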

While a setup using ZKP and FHE is technically possible today, its computational demands limit it to cases where the results are not needed in real time.

Wrap-up

Moving data into the cloud is necessary for businesses to succeed, but it faces the challenges of trust, energy consumption, and performance. Once technology solves these issues through real-time FHE/ZKP, it will enable a wave of cloud migration in which sensitive data can be securely processed and stored.

 


[1] Data from the article "Data centers keep energy use steady despite big growth."

Intro to Privacy Tech, Part 1

[This is part 1 of a 3-part series on Privacy Tech. You can find part 2 and part 3 at these links.]

What’s the Problem?

I was at the grocery store and, as I paid by swiping my phone near the cash register, I marveled at how easy it was to pay. And then I thought about how complex this process really is: within a couple of seconds, my phone verified that I own it using biometric data, then confirmed that my credit card was authorized for use on the phone and had sufficient credit to buy the items. During those few seconds, a large amount of data was transferred between me and my phone, my phone and my banking app, and my banking app and the multiple servers needed to verify funds. I assumed I was making a simple transaction, but in reality it involved many different parties, all of whom shared my personal data. How can this process be safe and secure?

When you think about it, we live in a world of digital data. Whether it's on your phone or laptop, or at a business you interact with, including your healthcare and financial institutions, it's all about the data. From the point of view of businesses, they are responsible for the security of their customers' data. In addition, a business often needs to process large amounts of personal data to extract actionable insights from it. For example, a bank wants to determine which customers to target with loan offers. It examines its customer data to see which customers are most likely to take a loan and least likely to default on it. As a result, it might determine that a customer with a $75k annual income and no credit card debt can afford to make payments on a short-term loan of up to $15k per year.

Our data is valuable, and we don't want it to fall into the wrong hands.

Ubiquitous Data

We experience potential privacy issues from the moment we wake up in the morning until we go to sleep – and even while we sleep.

Just imagine if this sensitive data got into the wrong hands. How much damage could be done to an individual? To businesses? To governments?

What is Privacy Tech?

We now know just how valuable our data is – so how do we keep it secure? The answer is privacy tech. Privacy tech is any technology used to secure personal data. Some examples are using a VPN, encrypting data stored on a hard drive, and using TLS (Transport Layer Security, which encrypts traffic on a network) when communicating between devices. You might not have thought about it, but if you've ever gone to a website whose URL starts with https instead of http, it's using TLS to protect your data. In our grocery store case above, encryption keeps everything secure – data sent between parties is encrypted, and stored data is encrypted at rest. If any of it is stolen, it is practically impossible to decrypt in any reasonable amount of time.

Some tools are created with privacy in mind, known as privacy-first tools. Included in this category are the DuckDuckGo search engine, the Brave browser, and the Signal messaging app. While not built solely for privacy, they are built with privacy as an important aspect of their design.

Privacy Enhancing Technologies (PETs) are a subset of privacy tech. With PETs we can perform operations on encrypted data and receive actionable insights without ever decrypting the data. We’ll discuss these more in another post.

Wrap-up

The digital world is facing a massive privacy problem that is growing larger every day. Privacy tech must be embraced to help us protect our data.

Private data is the 21st century’s most valuable resource – it must be protected today and in a post-quantum era.

Grid Stabilization and Bitcoin Mining

Is bitcoin mining an environmental and social good? Energy stability and the ESG case for bitcoin mining

On 12 July 2022, Marathon Digital Holdings tweeted, "#Bitcoin miners are a non-rival consumer of power – we can modulate demand as needed and help stabilize the grid." But what does stabilizing the grid mean? And why is grid stabilization so important to understanding the ESG upside of bitcoin mining?

Understanding Power Grids

To get to grips with one of the major environmental and social goods of bitcoin, we need to understand a little about how power grids operate. Most people take power grids for granted – when you flick a switch, electricity flows and your appliances simply work. In return for your monthly bill, you usually receive reliable electricity. But for local utility companies to provide that reliable electricity, extra energy always needs to be available for the whole grid. This means that each grid requires reliable redundancies – extra power – that can be delivered quickly. That way, during a heatwave or a big sporting event, everyone can use lots of electricity at the same time without power cuts.

When power generation for the whole grid came from fossil fuels, grid operators could balance the demand and supply of energy relatively easily. As power grids integrate renewables, this becomes more difficult, because renewable energy sources are usually intermittent (the wind blows inconsistently, and it is not always sunny). Renewables make reliable energy generation more unpredictable and more complicated. Power grids need sources of energy that can be dispatched on demand to meet spikes in electricity demand. This is often natural gas, as it can be brought online quickly and cheaply.

Nuclear, the 'greenest' energy source that can serve as a reliable backup, takes a long time to become available as dispatchable power. Renewable energy can be dispatchable, but it is not reliable, and if a grid relies on hydro-electric power, much of that power is wasted much of the time. Batteries can help, and they should be part of the solution for long-term grid stabilization, but even the world's biggest battery would not be able to stabilize a large grid during a heatwave.

Bitcoin to the Rescue

Bitcoin mining is an integral part of long-term grid stabilization. Bitcoin mining can transform a nuclear plant or a dam into dispatchable generation by consuming the extra energy that the grid needs to have available but does not need to use. This is because enterprise bitcoin miners can power down within five minutes. No other technology can respond that rapidly and remain financially sustainable. Even data centers cannot power down so quickly, as voluntarily interrupting their computations is not a viable option; bitcoin miners merely lose the single block they were working on.

Climate change events, global warming, inconsistent work patterns, and COVID-19 mean that modern energy supply and demand are continuously in flux. Grid operators increasingly deploy price signals, which allow industries and individual 'smart' devices to consume more power when the grid has excess energy and power down when the grid needs the energy back. Some enterprises and individuals use battery technology to buy energy from the grid at night, when it is cheaper, and use that energy during the day. Despite these efforts, a more centralized solution is needed to prevent large-scale disruption of energy supply in extreme conditions.

The July 2022 Texas heatwave is a great example of bitcoin mining as a grid stabilizer. In July, energy usage on the Texas energy grid (ERCOT) reached an all-time high. What saved Texas from potentially life-threatening power cuts and energy disruption was bitcoin mining: over 1,000 megawatts (nearly 1% of total grid capacity!) was released from mining operations back to the ERCOT grid at a moment's notice.

This sort of cooperation is not only helpful for grids and bitcoin miners – it can also fuel renewable energy R&D. By monetizing all the energy generated by renewables, bitcoin mining makes investing in new renewables financially viable and enables investors to see quicker returns. Bitcoin mining can also help connect renewables to the grid by transforming the energy generated into dispatchable generation.

With the bear market that began in Q2 2022, profitable bitcoin mining is increasingly consolidated, which helps integrate bitcoin mining into power grids and transform it into a controllable load resource. As climate change makes grid stabilization more and more difficult to achieve, we will likely see a rise in states incentivizing enterprise bitcoin mining operations. In turn, bitcoin mining will power rapid, profitable renewable energy development and a more stable grid.

Mining in a Bear Market

We’re deep in a bear market. But this is not the first time we have seen a downturn in crypto markets.

In the year following December 2017, the value of Bitcoin dropped 84%. After a short period of recovery, with the onset of COVID-19, Bitcoin lost over half of its value in two days in March 2020, falling from above $10,000 in February to below $4,000 in March. In May 2021, the value of Bitcoin fell 53%: $1 trillion was wiped off the global crypto market in one week when Elon Musk refused to accept Bitcoin as payment for Tesla, China announced a more extensive crackdown on crypto mining, and the environmental impact of Bitcoin hit the headlines. Most recently, the collapse of FTX and the knock-on effect on cryptocurrency trading platforms have added further instability to the market.

When we look at longer term developments in the Bitcoin mining market, there have been two important changes that impact the current downturn: 1) the institutionalization and consolidation of crypto mining by enterprise miners, and 2) rapid advancements in mining hardware.

Bitcoin Resilience

Throughout these price fluctuations, the fundamentals of Bitcoin (the architecture, the code, and the value proposition) have never changed. The Bitcoin algorithm still uses a Proof of Work model, and both mining and transactions flow through a distributed, decentralized ledger. There is a popular idea that the price of Bitcoin is cyclical: regardless of market conditions, crypto regulation, or one-time events in the crypto space, as with any volatile asset, fluctuations are part of the long-term game.

Despite speculation that the price of Bitcoin might drop so low that it becomes unprofitable to mine, this seems unlikely. Not only has Bitcoin survived downturns before, but the game theory and economics of Bitcoin were designed so that it could remain a viable asset even at lower prices. Bitcoin is still widely seen as the test case for DeFi. As the first blockchain used for transferring and transacting value, Bitcoin is a high-stakes model for the future of finance.

Part of the reason Bitcoin is an important test case is that, as Mark Yusko said in July 2022, “every stock, every bond, every currency, every commodity, every piece of art, every collectable car, every house title, every marriage license, everything that can be titled or owned will eventually run on blockchains.” Ultimately “in the new blockchain era Bitcoin clearly is the base layer … it’s the most stable, most secure … it’s the most powerful computing network on the planet, bar none.”[1]

Agreeing with Yusko, many enterprise miners and Bitcoin investors have been sticking to their HODL strategy. While some enterprise mining companies have had to sell BTC to resolve treasury management issues, most continue to hold, believing that the value of the BTC they hold is greater than the short-term gains to be made by selling. This strategy is only available to companies that are sufficiently well-established and consolidated to keep mining BTC profitably at current price points while meeting their commitments to energy and operational costs. One of the key factors enabling established miners to keep going is the quality of their hardware.

Mining Hardware

Mining difficulty is tied to the total hash rate of the bitcoin network. Difficulty is a measure of how much computing power it takes to mine a BTC block – at higher difficulty, it takes more computing power to mine the same number of blocks, making the network more resource-intensive. The sort of hardware that is online therefore affects the network difficulty. Many enterprise miners have taken older machines offline because running them is not profitable – the BTC they mine is worth less than the cost of running and maintaining such inefficient machines. If these machines stay offline and are not replaced by newer, more advanced machines, difficulty will decrease because there will be less competition for each block; in that case, more established, larger-scale miners will be able to deploy more hash rate at the new, lower difficulty and mine more BTC. Conversely, if older machines (most notably the Bitmain Antminer S9) are put back online, network difficulty will stabilize.

Source: Blockchain.com
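
As a back-of-the-envelope illustration of how difficulty translates into mining work, here is a short Python sketch; the difficulty and hash-rate figures are placeholders rather than current network values, and the conversion uses the standard approximation of roughly difficulty x 2^32 expected hashes per block.

```python
# Back-of-the-envelope sketch: how mining difficulty translates into expected work.
# The difficulty and hash-rate figures below are placeholders, not current values.

HASHES_PER_DIFFICULTY_1 = 2 ** 32      # standard approximation for Bitcoin

def expected_seconds_per_block(difficulty, hashrate_hs):
    """Expected time for hardware producing hashrate_hs hashes/second
    to find one block at the given difficulty."""
    return difficulty * HASHES_PER_DIFFICULTY_1 / hashrate_hs

difficulty = 30e12                     # placeholder network difficulty
one_rig = 100e12                       # a single 100 TH/s machine
seconds = expected_seconds_per_block(difficulty, one_rig)
print(f"One rig: roughly {seconds / (3600 * 24 * 365):.0f} years per block on average")

# The network retargets difficulty every 2016 blocks so that, at the total
# network hash rate, blocks keep arriving roughly every 10 minutes. When old,
# inefficient machines go offline, total hash rate falls and difficulty follows.
```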

Miners can survive, and even take advantage of, this bear market by:
1) Maximizing hardware efficiency. Using low-power, high-performance hardware allows miners to profit from the bitcoin they mine regardless of network difficulty.
2) Optimizing mining operations. Some enterprise miners have been negatively impacted by operations issues including getting hardware online, cable management and cooling, and regulatory holdups. Minimizing operations disruption is key to surviving this bear market.
3) Optimizing energy costs. Enterprise miners have found different, creative ways of reducing energy bills while continuing mining (including partnering with energy suppliers, using otherwise-wasted energy sources, and lobbying governments for tax incentives).

As energy becomes more expensive, and relocating mining rigs is difficult and costly, powerful, efficient hardware is the key to surviving, and even thriving, in this latest bear market. With the right hardware, enterprise miners can come out of this cycle stronger and ready for the next bull market, whenever it begins.

[1] https://mebfaber.com/2022/07/06/e427-yusko/