This post is for Week 5 of the BlueDot Impact AI Safety Fundamentals: Governance course: Closing regulatory gaps through non-proliferation. Each week of the course comprises some readings, a short essay-writing task, and a 2-hour group discussion. This post is part of a series that I'm publishing to show my work, document my current thinking on the topics, and better reflect on the group discussion, as I explain here.
Essay task
This week we were tasked with defining and evaluating a policy tool supporting one of the strategic directions from Holden Karnofsky's Racing through a minefield: the AI deployment problem blog post on Cold Takes. I selected
Global monitoring: identify and prevent “incautious” projects racing toward deploying dangerous AI systems.
Somewhat more concretely, what might this look like?
- Creating devices and mechanisms by which the usage of data centres containing cutting-edge AI chips can be monitored
- In a similar vein to nuclear enrichment monitoring devices
- Devices are tamper-proof, or tamper-detectable
- Could monitor things like:
- HVAC cooling load
- Power draw
- Water/coolant consumption
- Compute processing time
- Net consumption per unit of compute can be benchmarked for each data centre facility
- Enforce a know-your-customer (KYC) policy for the operators of the data centres so that customer compute usage is tracked
- Reconcile the total usage from the monitoring devices/mechanisms against the KYC compute monitoring as a verification mechanism (see the sketch after this list)
- This helps to incentivise/enforce stricter KYC compute monitoring, as gaps in customer usage could be detected
- Customer usage is reported to an international governing body and is made available to either the public or a select group of 'cautious' actors
- Maybe only usage over a certain threshold is reported
- Maybe the T&Cs of using AI facilities include agreeing to your usage being reported
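To make the reconciliation idea a bit more concrete, here is a minimal sketch (in Python) of how a monitoring body might compare a facility's metered energy draw against the sum of its KYC-reported customer compute usage. The benchmark value, tolerance, facility names, and figures are hypothetical placeholders I've made up for illustration, not part of any real reporting standard.

```python
from dataclasses import dataclass

# Hypothetical benchmark: metered energy a facility is expected to consume per
# reported GPU-hour (chips plus cooling overhead). In practice this would be
# calibrated per facility and per hardware generation from historical data.
EXPECTED_KWH_PER_GPU_HOUR = 1.2
TOLERANCE = 0.10  # flag if metered usage exceeds expected usage by >10%

@dataclass
class FacilityReport:
    facility_id: str
    metered_kwh: float         # from tamper-proof/tamper-detectable facility monitors
    reported_gpu_hours: float  # sum of KYC-reported customer usage

def reconcile(report: FacilityReport) -> bool:
    """Return True if metered usage is consistent with KYC-reported usage."""
    expected_kwh = report.reported_gpu_hours * EXPECTED_KWH_PER_GPU_HOUR
    if expected_kwh == 0:
        return report.metered_kwh == 0
    gap = (report.metered_kwh - expected_kwh) / expected_kwh
    return gap <= TOLERANCE

# Example: a facility whose metered draw is well above what its customers
# reported, which would be flagged for follow-up by the governing body.
suspect = FacilityReport("dc-eu-01", metered_kwh=2_000_000, reported_gpu_hours=1_200_000)
print(reconcile(suspect))  # False -> gap suggests unreported compute usage
```

A real scheme would presumably treat a failed reconciliation as a trigger for investigation rather than as proof of wrongdoing.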
Explain concrete reasons this may be feasible or infeasible to implement. You may wish to focus on a particular jurisdiction of your choice.
- Feasible
- Similar monitoring frameworks have been created for nuclear weapons to allow for international verification of non-proliferation commitments
- KYC is an existing framework from the financial sector that has helped prevent fraud and bad actors (this is an assumption I would want to verify)
- It is likely relatively straightforward to correlate the facility monitoring metrics (cooling load, power draw, coolant, etc.) with the compute load in a facility (a rough illustration follows this list).
- Given the supply chain for cutting-edge chips involves a limited set of actors, it could be relatively straightforward for sellers of chips to require data centre buyers to have KYC + facility monitoring in place
- This requirement could come from governments in a similar vein to the way financial KYC is required to allow a company to operate in a field.
- US companies – Amazon Web Services, Microsoft Azure, and Google Cloud – account for 65% of the market share of cloud computing providers (reference). Therefore, regulation on monitoring enforced by the US government would affect a significant fraction of the cloud computing market.
- The supply chain is more concentrated upstream – for example at the AI chip fabrication or semiconductor manufacturing equipment stage – which could be even higher leverage for 'trickle down' governance via requiring downstream buyers to adhere to KYC + facility monitoring
- Infeasible
- It could be hard to get data centre operators and companies to buy into KYC + shared reporting metrics
- This is further complicated by data centres being distributed internationally across many jurisdictions (I assume)
- This would require a critical mass of AI-capable data centres to agree to monitoring
- It might be that not all data centres need to be monitored, and that if enough are monitored then it would be possible to infer what is happening at the others. E.g. if a company brings out a new AI product with no/low reported usage, then it's likely they used an unmonitored data centre for training. This still seems far from an ideal scenario, though, as it doesn't allow for monitoring in advance.
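As a rough illustration of the "relatively straightforward to correlate" point above, the sketch below fits a simple linear relationship between a facility's known compute load and its metered power draw; the fitted relationship could then serve as the per-facility benchmark used in the reconciliation check. The data and resulting coefficients are invented for illustration.

```python
import numpy as np

# Hypothetical historical data for one facility: known compute load
# (GPU-hours/day) and power draw metered over the same periods (kWh/day).
gpu_hours = np.array([100_000, 150_000, 200_000, 250_000, 300_000])
metered_kwh = np.array([135_000, 195_000, 255_000, 320_000, 380_000])

# Fit metered_kwh ~= slope * gpu_hours + baseline.
# slope is the facility's effective kWh per GPU-hour (including cooling
# overhead); baseline captures idle/overhead consumption.
slope, baseline = np.polyfit(gpu_hours, metered_kwh, 1)
print(f"~{slope:.2f} kWh per GPU-hour, ~{baseline:,.0f} kWh/day baseline")

# Later observations that fall well above the fitted line would suggest
# compute usage that is not showing up in the KYC reports.
predicted = slope * 220_000 + baseline
print(f"Expected draw at 220k GPU-hours/day: ~{predicted:,.0f} kWh")
```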
What are the potential costs to the approach, or why might it be harmful?
- It potentially sets up an incentive structure for "bad" or "incautious" actors to turn to, or create, a black market
- Although there could be upsides to this – such as it being easier to shut down illegal data centres
- This mechanism starts to break down if/when cutting-edge chips are no longer required by "incautious" actors to train dangerous models
How well does it address the risks it addresses? E.g. there is a significant difference between mitigating a little, mitigating a lot and eliminating a risk - which may inform whether this measure is sufficient for handling this risk.
- It doesn't eliminate the risk that a bad/incautious actor accesses a black market for AI training
- However, it seems to mitigate a lot of the risk by making it impossible for non-nefarious but "incautious" actors – i.e. actors that won't do illegal things but are nonetheless incautious – to use large quantities of compute without being noticed.
- Is "being noticed" sufficient to reduce the risk of "incautious" actors racing towards deploying dangerous AI systems? No, certainly not, but it provides information for other "cautious" actors to act on to limit the actions of "incautious" actors
- So, in summary, facility and KYC compute monitoring and reconciliation doesn't prevent racing in and of itself; however, it would be a necessary step to enable the prevention of projects racing to deploy dangerous AI systems.
Group discussion
The AI race between the US and China
The focus of one of the readings this week, The State of AI in Different Countries — An Overview by Lizka Vaintrob, was the relative state of AI between the US and China. The article makes the case for the US, and its allies, maintaining an advantage in the ongoing development of AI capabilities over China. Key arguments for this included higher investment, access to top AI talent, access to the semiconductor supply chain, and censorship in China.
For me, the most interesting takeaway from the article was that China may not be fertile ground for ongoing AI innovation. The political restrictions placed on the development and deployment of AI systems may prove too limiting for innovation relative to the conditions present in the US and its allies. This, combined with the other differences between the financial, scientific, and talent landscapes, could mean that China cannot compete with US AI innovation in the long run.
I find this argument, and the evidence presented in Vaintrob's article, compelling. I have downgraded my level of concern about China when it comes to cutting-edge AI and upgraded my concern about the US – although I still feel that there are risks from non-cutting-edge AI when it comes to China. When thinking about how cutting-edge AI models might exacerbate long-running catastrophic risks, this week's content has led me to think we should be more worried about AI development in the US. The US seems much more likely to maintain the lead on AI development, and efforts to reduce the risks from cutting-edge AI models consequently seem better aimed at the US than at China, on the margin.
–
On a somewhat related point, this week's discussion surfaced the idea that the US export controls placed on China make it possible for the US to discuss self-regulation. The reasoning is that while the "US people" – policymakers, interest groups, etc. – feel that there is competitive pressure from China, they will be uninterested in contemplating slowing down their development. So export controls on China assuage the fear of China taking the lead and enable self-regulation – which could result in a slowing of development – to enter the Overton window.
It's an interesting idea about which I'd be curious to learn more – specifically, how accurate is it? The more abstract model – export controls as instrumentally valuable for enabling discourse on US-centric regulation – is interesting to me. What other instrumentally valuable governance mechanisms are out there? And how could they be a means to the end of reducing catastrophic risk?
Some miscellaneous thoughts/notes from the discussion
- A thought experiment: if you feel that the US placing export controls on China is self-serving to US interests, how would you feel if an international body decided to place export controls on China?
- US antitrust laws restrict the private sharing of certain information between competing companies. This can prevent AI companies from sharing safety information privately with each other. As a result, they publish public articles to share safety best practices, etc.
- As a hypothetical, would relaxing antitrust laws such that AI companies can secretly share safety information be net beneficial for reducing risk from AI?
- I "weakly disagreed" based on thinking it better for safety techniques and best practices to be publically accessible by default, instead of secretly shared – assuming that sharing safety techniques doesn't reveal information that would enable others to develop capabilities
- A governance system in which it is possible to view which GPUs are being used by which companies to train models could be an information security vulnerability. It would enable bad actors to target those GPUs for cyber attacks.