Artificial intelligence (AI) is shaping the way we live, work and connect with the world. From chatbots to image generators, AI is transforming our online experiences. But this change raises serious questions: Who controls the technology behind these AI systems? And how can we ensure that everyone – not just traditional big tech – gets a fair chance to access and contribute to this powerful tool?
To explore these crucial issues, Mozilla commissioned two studies that delve deeply into the challenges surrounding AI access and competition: “External Researcher Access to Closed Foundation Models” (conducted by the data rights agency AWO) and “Stopping Big Tech From Becoming Big AI” (conducted by the Open Markets Institute). These reports show how AI is built, who is in control, and what changes need to happen to ensure a fair and open AI ecosystem.
Why access for researchers is important
“External Researcher Access to Closed Foundation Models” (written by Esme Harrington and Dr. Mathias Vermeulen from AWO) addresses an urgent problem: independent researchers need better conditions for accessing and studying the AI models that large companies have developed. Foundation models – the core technology behind many AI applications – are mainly controlled by a few big players who decide who gets to study or use them.
What’s the problem with access?
- Limited access: Companies like OpenAI, Google and others are the gatekeepers. They often restrict access to researchers whose work aligns with their priorities, meaning independent research in the public interest can be left out in the cold.
- High costs: Even when access is granted, it often comes with a hefty price tag that smaller or less funded teams cannot afford.
- Lack of transparency: These companies don’t always share how their models are updated or moderated, making it nearly impossible for researchers to replicate studies or fully understand the technology.
- Legal risks: When researchers try to scrutinize these models, they sometimes face legal threats if their work reveals flaws or vulnerabilities in the AI systems.
The research suggests that companies should provide more affordable and transparent access to improve AI research. Furthermore, governments should provide legal protection to researchers, especially when they act in the public interest by investigating potential risks.
External Researcher Access to Closed Foundation Models
Read the paper
AI Competition: Is Big Tech Stifling Innovation?
The second study (authored by Max von Thun and Daniel Hanley of the Open Markets Institute) takes a deeper look at AI’s competitive landscape. Currently, a few tech giants such as Microsoft, Google, Amazon, Meta and Apple are building extensive ecosystems that allow them to dominate different parts of the AI value chain. And a handful of companies control most of the key resources needed to develop advanced AI, including computing power, data and cloud infrastructure. The result? Smaller companies and independent innovators are being squeezed out of the race from the start.
What’s happening in the AI market?
- Market concentration: A small number of companies have a stranglehold on key AI inputs and distribution. They control the data, computing power and infrastructure that everyone needs to develop AI.
- Anti-competitive ties: These big players acquire or strike deals with smaller AI startups in ways that often bypass traditional merger controls. This can stop those startups from ever challenging big tech and keep others from competing on a level playing field.
- Gatekeeper power: Big Tech’s control over critical infrastructure – such as cloud services and app stores – allows them to impose unfair terms on smaller competitors. They may charge higher fees or prioritize their products over others.
The research calls for strong action from governments and regulators to prevent the same market concentration that we have seen in digital markets over the past twenty years. It’s about creating a level playing field where smaller companies can compete, innovate and offer consumers more choice. This means enforcing rules to prevent tech giants from using their platforms to give their AI products an unfair advantage. It also means making crucial resources such as computing power and data more accessible to everyone, not just big tech.
Preventing Big Tech from becoming Big AI
Read the paper
Why this matters
AI has the potential to bring significant benefits to society, but only if it is developed in a way that is open, fair and responsible. Mozilla believes that a few powerful companies should not determine the future of AI. Instead, we need a diverse and vibrant ecosystem where public interest research thrives and competition drives innovation and choice – including from open source, public, non-profit and private actors.
The findings highlight the need for change. Improving researcher access to foundation models and addressing the growing concentration of power in AI can ensure that AI evolves in a way that benefits us all – not just the tech giants.
Mozilla aims to advocate for a more transparent and competitive AI landscape; this research is an essential step toward realizing that vision.