Artificial intelligence is quickly becoming the most transformative technology of our time. But as it surpasses other groundbreaking innovations such as cryptocurrency and blockchain in the public consciousness, it is also gaining an unenviable reputation. Despite its astonishing progress and enormous promise, AI is now vying for the dubious honor of being one of the most mistrusted industries.
From fears of AI-generated deepfakes spreading political disinformation to tragic stories of chatbots linked to self-harm, the public narrative around AI appears increasingly negative. Whether it’s headlines about a fabricated Taylor Swift endorsement misleading fans or an AI chatbot allegedly driving a teen to commit suicide, these incidents are fueling fears about a technology that could spiral out of control.
The AI industry is facing an image problem that mirrors the challenges the cryptocurrency sector previously faced. To navigate this critical moment, AI innovators must learn from crypto’s missteps in communications and public engagement to build trust and ensure responsible innovation.
Artificial intelligence and cryptocurrency each have the potential to redefine industries, economies and personal experiences. But they share a common challenge: significant reputational problems arising from public fear, abuse and regulatory scrutiny. As someone who has spent years working with tech startups in both AI and blockchain, I have witnessed firsthand how miscommunication and a lack of proactive engagement can hold back even the most groundbreaking innovations.
The crypto industry in particular has had a tumultuous journey – from the heights of speculative excitement to the depths of public disillusionment and strict regulation. Growing concerns about AI’s impact on society, illustrated by high-profile cases of abuse, offer a timely parallel. AI startups may choose to repeat the mistakes of cryptocurrency or take a path that promotes trust and emphasizes ethical responsibility.
Shared reputation challenges
The early days of cryptocurrency were marred by stories of fraud, volatility and associations with illegal activities. High-profile hacks and scams overshadowed the transformative potential of blockchain technology in areas such as secure data sharing and financial inclusion.
AI now faces its own set of challenges. On the eve of the US election, AI-generated deepfakes have raised alarms about the erosion of truth and the manipulation of public opinion, while troubling AI chatbot personas have raised questions about the ethical design and deployment of AI systems. There are growing fears that AI could unintentionally cause harm if not properly regulated and monitored.
Regulators and lawmakers are taking notice. Discussions about implementing guidelines and laws to govern AI technologies are gaining momentum worldwide. Without proactive engagement and effective communication, AI companies risk being hampered by regulations that could stifle innovation and delay the deployment of useful technologies.
Unlike with the crypto industry, governments are eager to harness AI’s potential for national security and economic competitiveness. The White House this month issued a memo highlighting the importance of AI to national security and directing federal agencies to adopt AI technologies while prioritizing safety, security and reliability. This government appetite for AI advancements presents the industry with a unique opportunity to manage its reputation effectively and work with policymakers to accelerate responsible adoption – in contrast to the crypto sector, which faced years of often hostile resistance.
A crucial mistake that many crypto projects made was overpromising and underdelivering. Grand visions were outlined, but tangible products or services often failed to materialize. This gap between expectations and reality led to public disillusionment and increased scrutiny by regulators.
AI startups should avoid this pitfall by focusing on practical, hands-on applications that prioritize safety and ethical considerations. For example, developers should implement robust safeguards in AI systems to prevent misuse and unintended consequences. By demonstrating a commitment to user welfare and ethical standards, companies can build trust and credibility.
Effective communication, ethical responsibility
Communication is more than marketing; it’s about building relationships and promoting understanding. The crypto industry often struggled with opaque reporting and a lack of transparency, which fueled distrust.
AI startups need to take a different approach:
- Transparency: Be open about how AI systems work, the data they use and their limitations. Transparency demystifies the technology and removes the fear that comes from the ‘black box’ nature of some AI models.
- Ethical guidelines: Develop and follow strict ethical guidelines regarding the use and deployment of AI. Publicly sharing these guidelines can build trust and set industry standards.
- Proactive involvement: Don’t wait for regulations to come in. Work with policymakers, the public and other stakeholders to shape sensible regulations that protect society without stifling innovation.
Building bridges with stakeholders
Crypto’s hostile attitude towards regulators and traditional institutions often backfired. AI startups, on the other hand, should strive for collaboration:
- Work with regulators: Maintain open lines of communication with lawmakers to inform them about the technology and its implications. Providing expertise can help create balanced policy that protects users while allowing innovation to flourish.
- Inform the public: Invest in public education initiatives to improve understanding of AI. This could include community workshops, informational content or partnerships with educational institutions.
- Collaborate with experts: Work with ethicists, psychologists, and other professionals to ensure AI systems are designed with a holistic understanding of human behavior and societal impact.
The convergence of AI and crypto
Interestingly, AI and crypto are not just parallel technologies, but are increasingly intersecting. Blockchain technology can provide solutions to some of the challenges of AI:
- Data integrity and privacy: Blockchain can improve data security and give users control over their personal data, addressing privacy concerns inherent in AI data collection practices.
- Authenticity verification: Blockchain can help verify the authenticity of digital content and protect against deepfakes by creating immutable records of original media.
- Decentralized computing power: Decentralized networks can distribute computing resources across multiple nodes, enabling the training of AI models on massive data sets without relying on centralized data centers. This approach reduces costs, improves efficiency, and democratizes access to AI development by enabling a broader range of participants to contribute processing power and collaborate on AI innovations.
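The authenticity-verification idea above can be sketched in a few lines. The ledger itself is out of scope here, but the core check – comparing a file’s cryptographic fingerprint against a digest recorded immutably at publication time – is straightforward. This is a minimal illustration, and the function names are hypothetical:

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 digest of the media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_authentic(media_bytes: bytes, recorded_digest: str) -> bool:
    """Check a circulating copy against a digest anchored in an immutable record."""
    return fingerprint(media_bytes) == recorded_digest

# At publication time, the creator records the fingerprint (e.g. on-chain).
original = b"official campaign video, v1"
anchored = fingerprint(original)

# Later, anyone can verify whether a copy matches the original record.
print(is_authentic(original, anchored))                   # True
print(is_authentic(b"tampered deepfake copy", anchored))  # False
```

Any alteration to the media changes the digest, so a mismatch flags the copy as modified – the blockchain’s role is simply to make the recorded digest tamper-proof and publicly checkable.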
Learning from crypto’s experience
Crypto’s turbulent history provides a roadmap of pitfalls to avoid:
- Avoid complacency: Recognize that public trust must be earned and maintained through consistent action and communication.
- Tackling abuse proactively: Just as crypto faced issues with illegal activity, AI must address the potential for abuse head-on and develop safeguards against malicious applications such as deepfakes or unethical chatbots.
- Show social responsibility: Companies that demonstrate a commitment to social good can differentiate themselves and build stronger relationships with both the public and regulators.
Addressing impact on society
Both AI and crypto have the potential to disrupt societal norms and institutions. This disruption can lead to resistance unless it is handled carefully:
- Limit negative consequences: Actively work to reduce potential harm, such as risks to mental health or the erosion of trust in information.
- Emphasize positive contributions: Highlight how AI can improve lives, from improving healthcare outcomes to enabling new forms of communication and education.
- Encourage inclusivity: Ensure AI technologies are developed with diverse perspectives to fairly serve a wide range of communities.
Conclusion
The drumbeat of negative headlines about AI underlines the urgent need for the industry to address its image problem. By learning from the crypto industry’s missteps—overpromising, poor communication, and hostile stances—AI companies can meet these challenges more effectively.
Effective communication, ethical responsibility, and proactive engagement are not just strategies for success; they are necessities. The goal is not only technological progress, but also integrating innovation into society in a way that is accepted and trusted.
AI has the opportunity to write a different story – a story where technology and humanity move forward together, responsibly and ethically. By addressing fears, demonstrating real value, and committing to ethical practices, AI startups can ensure they are part of the solution, not part of the problem.
Saul Hudson is a managing partner at Corner42, a strategic communications agency for fast-growing startups in the Web3, AI and emerging technology industries.