As we think through DSNP governance, we are painfully aware that policy decisions look different in different parts of the stack. Wil and I took a few minutes to jot down one example of how this shows up in our work, and we’ve posted it here so people can start to see how we’re looking at the different pieces.
When somebody posts content, accountability for that content and data sits in a few different places:
- Author
- Recording Services
- Storage, Hosting, and Delivery Services
The author is whoever originates the content. At a human level, this could mean the person or entity who controls the account, as well as anyone who contributes to creating the content. At a technical level, it is the account marked as the “fromId” in the Announcement.
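For readers who want to see where the “fromId” lives, here is a minimal sketch of a Broadcast Announcement in TypeScript. The field names follow the DSNP spec, but the types are simplified and the example values are invented; treat it as illustrative, not normative.

```typescript
// Simplified sketch of a DSNP Broadcast Announcement, for illustration only.
// Exact types and any additional fields are elided here.
interface BroadcastAnnouncement {
  announcementType: number; // Broadcast (2 in current spec versions)
  fromId: bigint;           // DSNP User Id of the author — where
                            // author-level accountability attaches
  url: string;              // where the content itself is hosted
  contentHash: string;      // hash binding the announcement to that content
}

// Whatever happens downstream, accountability for the content itself
// traces back to the account behind `fromId`.
const example: BroadcastAnnouncement = {
  announcementType: 2,
  fromId: 123456n,                                // invented account
  url: "https://example.com/posts/hello.json",    // invented URL
  contentHash: "0x1220...",                       // elided for brevity
};
```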
Recording Services are the apps and services that take content and deliver it to DSNP for inclusion in the blockchain. For most posts, this will be whatever app was used to post the content. As the ecosystem matures, intermediate services may emerge that process content in ways that call for their own accountability mechanisms.
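As a rough sketch of where a recording service sits in that flow, consider the following; every function name here is an invented placeholder rather than a DSNP API.

```typescript
type AnnouncementInput = {
  announcementType: number;
  fromId: bigint;
  url: string;
  contentHash: string;
};

// Stubs standing in for real storage and chain-submission infrastructure.
async function storeContent(content: string): Promise<string> {
  return "https://storage.example.com/abc"; // invented URL
}
async function hashContent(content: string): Promise<string> {
  return "0x..."; // a real service would compute a content hash here
}
async function publishToChain(a: AnnouncementInput): Promise<void> {}

// The recording service is the accountable party for this step: it takes
// the author's content and delivers the announcement for on-chain inclusion.
async function recordPost(fromId: bigint, content: string): Promise<void> {
  const url = await storeContent(content);
  const contentHash = await hashContent(content);
  await publishToChain({ announcementType: 2, fromId, url, contentHash });
}
```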
Storage, Hosting, and Delivery services all have some responsibility for content after it hits the chain. This category includes the apps that show DSNP content to users. It also includes the services that store and host data, as well as the services users rely on to find it. Content indexers, scorers, and filters all fall into this category.
Accountability Is Not Uniform
Actors in this system do not all play the same role, so we must approach accountability for them in different ways. Some of the infrastructure described above is quite distant from the authors of content. Still, it may be useful to identify places in the ecosystem where bad actors tend to congregate. At some point, it is appropriate to take steps to rein in even deep-infrastructure service providers that enable a disproportionate amount of harm.
Accountability mechanisms will similarly vary. In many cases, widespread ignoring of messages from a specific user account will be an effective and reasonable response to abusive behavior. When dealing with larger pieces of infrastructure that sit further from users, though, it will often make more sense to subject messages to extra scrutiny (perhaps via reputation-scoring systems) than to blackhole large populations of users in real time.
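To make the contrast concrete, here is a hypothetical sketch (all names invented; nothing here is specified by DSNP) of the two mechanisms: dropping announcements from blocked accounts outright, versus scoring content that arrives through low-reputation infrastructure so it can receive extra scrutiny instead of disappearing.

```typescript
interface Announcement {
  fromId: bigint;
  viaService: string; // hypothetical: the recording service that published it
}

// Mechanism 1: account-level — simply ignore messages from specific accounts.
function filterBlockedAccounts(
  announcements: Announcement[],
  blocked: Set<bigint>
): Announcement[] {
  return announcements.filter((a) => !blocked.has(a.fromId));
}

// Mechanism 2: infrastructure-level — rather than blackholing everyone who
// used a suspect service, apply extra scrutiny via a reputation score.
function scoreAnnouncement(
  a: Announcement,
  serviceReputation: Map<string, number> // 0.0 (untrusted) .. 1.0 (trusted)
): number {
  const reputation = serviceReputation.get(a.viaService) ?? 0.5; // neutral default
  return reputation; // a real scorer would combine many more signals
}

// A delivery service might rank or down-weight low-scoring content rather
// than dropping it, preserving the messages of innocent users.
```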
Accountability breadth will also vary. Ecosystem-wide bans should be relatively rare. The transparency DSNP provides, together with reputation-scoring mechanisms, will improve the ability of other actors within the system to make meaningful choices about the thresholds that trigger accountability measures. Each service can make that decision in light of its own ethical principles, its users’ needs, and concern for its own reputation.
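One way to picture service-specific thresholds, again as a purely hypothetical sketch: two services consume the same reputation signal but apply different cutoffs that reflect their own principles and audiences.

```typescript
// Hypothetical: each service sets its own threshold over a shared signal.
interface AccountabilityPolicy {
  serviceName: string;
  minScoreToDisplay: number; // the service's own cutoff, 0.0 .. 1.0
}

function shouldDisplay(score: number, policy: AccountabilityPolicy): boolean {
  return score >= policy.minScoreToDisplay;
}

const familyFriendlyApp: AccountabilityPolicy = {
  serviceName: "kids-feed",   // invented example
  minScoreToDisplay: 0.8,     // conservative threshold
};

const openForum: AccountabilityPolicy = {
  serviceName: "town-square", // invented example
  minScoreToDisplay: 0.2,     // permissive threshold
};

// The same piece of content (score 0.5) might be shown in one place and
// not another — no ecosystem-wide ban is required.
console.log(shouldDisplay(0.5, familyFriendlyApp)); // false
console.log(shouldDisplay(0.5, openForum));         // true
```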