Ward pointed me at a recent article on algorithmic entities. It's the first academic paper I've read specifically on the subject, and it matters because it raises the issue of the relationship between artificial intelligence, law, and smart contracts.
LoPucki, in contrast to the arguments made by Shawn Bayern, argues that regulating algorithmic entities is currently impossible, largely due to charter competition between competing governments.
> ...charter competition generates systemic risk while impairing the political system’s ability to address the effects should that risk resolve unfavorably
I find most of the arguments true, if not obvious; exploring the world of legal hacks available within existing corporate structures is great fun, and not just for accountants. LoPucki's paper would be basic research for any ICO.
However, it is equally true, as Shawn Bayern argues, that law can regulate these issues. Regulating algorithmic entities is not impossible; the real problem is the lack of incentives to do so. When a lawyer says something is impossible, you would be advised to read her meaning as "you can't afford to pay me to do that for you".
This is the same issue that faces any international governance problem, or any problem requiring systemic change. There simply are no lucrative incentives for any party to change the system; the incentives all align with playing the existing game better than the opposition.
This is particularly true of constitutional systems like that of the US, but it is also expensive for common law systems (my arguments for common law algorithms notwithstanding). In both cases we need Legal Refactoring, and we need to finance it at a level competitive with the incentives to exploit loopholes (bugs) in the current state of the system. Bug hunting needs to be paid, and paid well.
Rather than seeing smart contracts (and, by implication, algorithmic entities) as the problem here, we should look to them as tools we can use in the design process we need to undertake to protect ourselves from the risks of autonomous entities.
Smart contracts that are also legally binding (Ricardian Contracts, in Ian Grigg's terminology) promise the ability to bring human governance to AI. We certainly need some form of powerful tool to help us with such governance, and Ricardian Contracts look like a promising candidate.
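To make the idea concrete, here is a minimal sketch of the Ricardian pattern in Python: the legal prose and the machine-readable terms are bound together by a single hash, so the code executing the contract and the humans (or courts) reading it are verifiably referring to the same document. The field names and the example instrument below are illustrative assumptions, not Grigg's canonical format.

```python
import hashlib
import json

# Illustrative legal prose; in a real instrument this would be the full
# contract text that a court could enforce.
legal_prose = """
The Issuer promises to redeem one unit of this instrument for one troy
ounce of silver, on demand, subject to the stated governing law.
"""

# Machine-readable parameters that software can act on directly.
# These field names are hypothetical, chosen for the sketch.
machine_terms = {
    "issuer": "example-issuer",
    "unit": "XAG",
    "redeemable": True,
}

# The two halves travel together as one document.
contract = {
    "prose": legal_prose,
    "terms": machine_terms,
}

# The contract's identifier is the hash of its canonical serialization.
# Smart-contract code can carry this hash, and a human dispute can be
# resolved against the identical prose it commits to.
canonical = json.dumps(contract, sort_keys=True).encode("utf-8")
contract_id = hashlib.sha256(canonical).hexdigest()

print("Ricardian contract id:", contract_id)
```

The hash is the governance hook: any party, human or algorithmic, that acts under this identifier is bound to one and the same instrument, which is exactly the bridge between code and law that the Ricardian design aims for.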
Rather than fearing the power of these tools and concentrating on their potential for catastrophic harm, we should be looking for ways to harness that power to protect us from forces that are both imminent and unavoidable.
For a while now I have been arguing that smart contracts and decentralized autonomous organizations will have ten times the impact of the original internet, and that they are more dangerous than nuclear weapons. Dramatising the problem this way should not stop us from looking to these tools to protect us from the systemic problems we face, as AI combined with a complete lack of any form of effective global (read: systemic) governance leads us into an existentially unstable future.
# See also

- Liquid Law
- Common Law Algorithm
- Smart contracts
- Ricardian Contract