As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education, and science. But, as with any new technology, it is worth considering how they can be misused. Against the backdrop of recurring online influence operations (covert or deceptive efforts to influence the opinions of a target audience) the paper asks:
How might language models change influence operations, and what steps can be taken to mitigate this threat?
Our work brought together different backgrounds and expertise: researchers grounded in the tactics, techniques, and procedures of online disinformation campaigns, along with machine learning experts in generative artificial intelligence, allowing us to base our analysis on trends in both domains.
We believe it is critical to analyze the threat of AI-enabled influence operations and to outline steps that can be taken before language models are used for influence operations at scale. We hope our research will inform policymakers who are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.