Consider the regulatory process. In the US, there is typically a comment period before many new regulations take effect. To date, it has been presumed that human beings are making the comments. Yet by mobilizing ChatGPT, it is possible for interested parties to flood the system. There is no law against using software to aid in the production of public comments, or legal documents for that matter, and if need be a human could always add some modest changes.
ChatGPT seems to do best when there is a wide range of relevant and available texts to train from. In this regard, the law is nearly an ideal subject. So it would not surprise me if the comment process, within the span of a year, is broken. Yet how exactly are governments supposed to keep out software-generated content?
Stack Overflow, a software forum, has already banned ChatGPT content because it has led to an unmanageable surfeit of material. The question is whether that ban can be enforced.
Of course regulatory comments are hardly the only vulnerable point in the US political system. ChatGPT can easily write a letter or email to a member of Congress praising or complaining about a particular policy, and that letter will be at least as good as what many constituents would write, arguably even better. Over time, interest groups will employ ChatGPT, and they will flood the political system with artificial but intelligent content.
So how will politicians tell which policies their constituents are genuinely interested in? Perhaps the personal visit to a congressional office will matter all the more. Legislative staff might disregard or at least discount their mail and email, and that will affect how political systems process voter information. Telephone polls, with all their imperfections, might become more important too, at least until a future GPT gains the ability to talk on the phone.
Congressional staff also use the internet to read policy analyses and gauge public opinion — and that will also have to change. ChatGPT can be used for so-called search-engine optimization, for instance by creating and writing artificial blogs, which then will link to each other. Online manipulation is hardly a new problem, but it will soon be increasingly difficult to distinguish between machine- and human-generated ideas. And remember: ChatGPT is improving all the time. “Let’s see what the bloggers have to say” won’t be so easy anymore.
Keep in mind all this is different from the classic problems of misinformation. I’ve been playing around with ChatGPT since last week, and my overall impression is that it is considerably more objective than most humans, including when it comes to economic analysis.
Of course other AI innovations might evolve as countermeasures. AI could be used to rank letters, emails, comments and websites by their degree of human content, though it will not be easy. (Just one example of the kinds of questions it will raise: Should software-generated content count for zero?)
I am not pessimistic about the rise of ChatGPT and related AI. Nor do I think it will ruin our government, any more than the post office or polling or the internet did. Legislators may well be able to communicate more clearly and accurately than before, if they so choose.
All that said, this technology will require a rethinking of one of the essential features of a democracy: how to measure public opinion.
For what it’s worth, I asked ChatGPT to write a dialogue around the core question of this piece — how ChatGPT will affect the operation of government. Its answer follows.
Person 2: Well, I think it has the potential to be quite useful in a number of ways. For example, it could be used to automate the analysis of large amounts of text-based data, such as legislation or policy documents.
Person 1: That’s true. And by providing more efficient and effective analysis, it could help government officials make more informed decisions.
Person 2: Exactly. And it could also be used to generate human-like text, which could be useful for things like drafting speeches or responding to constituents.
Person 1: Definitely. But I think it’s important to remember that ChatGPT is just a tool, and how it is used and what impact it has on government operations will ultimately depend on the decisions and actions of individuals and organizations.
Person 2: Agreed. It’s up to us to determine how we use technology like ChatGPT and make sure it is being used in a responsible and ethical manner.
I couldn’t have said it better myself.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution. He is coauthor of “Talent: How to Identify Energizers, Creatives, and Winners Around the World.”