A common narrative pervades discussions of artificial intelligence regulation: that AI exists in a lawless “Wild West,” devoid of meaningful oversight or control. This perception, while widespread, fundamentally misunderstands how existing legal frameworks already intersect with AI governance. In my soon-to-be-submitted Note for the Cardozo International & Comparative Law Review, I examine how AI actually fits within multiple established regulatory frameworks that, taken together, provide more comprehensive governance than many realize.
The key to understanding AI regulation lies in recognizing its nature as both a dual-use technology and a data-driven phenomenon. Like nuclear technology or advanced semiconductors, AI serves both civilian and military purposes, placing it within established frameworks for controlling sensitive technologies. Simultaneously, AI’s dependence on data subjects it to existing data protection and privacy regulations. These overlapping frameworks create a regulatory landscape that is both more complex and more comprehensive than the “Wild West” narrative suggests.
Recent developments in semiconductor export controls illustrate this complexity. When the Commerce Department tightened restrictions on advanced AI chips in October 2023, it was not acting in a regulatory vacuum; it was applying established principles of dual-use technology control to emerging capabilities. The restrictions’ impact on companies like Nvidia shows how these controls directly affect private sector innovation, a crucial consideration given that civilian companies now lead AI advancement.
The international dimension adds another layer of complexity. AI development depends on global supply chains and cross-border collaboration. The semiconductor restrictions required coordination with allies like Japan and the Netherlands, highlighting how effective AI governance demands international cooperation. This mirrors historical patterns in regulating dual-use technologies, where unilateral controls have often proved insufficient without multilateral support.
Environmental considerations further complicate the picture. AI development requires tremendous computational resources, making it dependent on physical infrastructure and environmental inputs. The energy requirements for training large language models, the rare earth elements needed for hardware components, and the water consumption of data centers all place AI development within existing frameworks for environmental regulation and resource management.
The challenge isn’t that AI lacks regulation; it’s that understanding its regulation requires examining multiple intersecting frameworks:
First, dual-use technology controls provide mechanisms for managing sensitive capabilities. These frameworks, developed over decades of regulating technologies from nuclear materials to advanced semiconductors, offer established approaches for balancing innovation with security concerns.
Second, data protection regulations address issues ranging from privacy protection to preventing electoral interference through deepfakes and misinformation. These frameworks become increasingly relevant as AI systems process larger amounts of sensitive data.
Third, environmental regulations affect AI development through controls on resource usage and infrastructure development. The physical requirements of AI systems place them squarely within existing frameworks for managing industrial resource consumption.
My ongoing research examines how these frameworks interact and where they might need adaptation to address AI’s unique characteristics. While traditional regulatory approaches may struggle with AI’s rapid development cycle, they provide essential foundations for effective governance.
The path forward isn’t about creating entirely new regulatory frameworks; it’s about understanding and adapting existing mechanisms to address emerging challenges. This requires careful consideration of how different regulatory approaches interact, how they affect private sector innovation, and how they can be coordinated internationally.
As I continue developing this analysis for my Note, I’m increasingly convinced that effective AI governance lies not in writing rules from scratch but in understanding and enhancing the sophisticated frameworks we already have. The challenge isn’t a lack of regulation; it’s understanding how existing regulations apply to this transformative technology.
This post previews ongoing research for my Note to be submitted to the Cardozo International & Comparative Law Review. The final analysis, due in March, will provide a comprehensive examination of how existing legal frameworks can inform effective AI governance.