November 17, 2023

Transparency in AI Companies: Stanford Study


In a world increasingly shaped by artificial intelligence (AI), the lack of transparency from major foundation model developers has reached alarming levels, according to a new study. The study, titled “The Foundation Model Transparency Index,” exposes just how little these influential developers disclose about their practices.

About the FMTI study and key findings

The Foundation Model Transparency Index, developed by a team of researchers from Stanford University, Massachusetts Institute of Technology, and Princeton University, assessed the transparency of 10 major foundation model developers, including industry giants like OpenAI, Google, and Meta. The study examined 100 fine-grained indicators across three domains: upstream resources, model characteristics, and downstream use. Shockingly, the results revealed a significant lack of AI transparency across all dimensions.
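To make the scoring concrete, the sketch below shows how an index of this kind can aggregate binary indicators into domain and overall scores. This is an illustration only, not the study's actual code: the indicator names and values are hypothetical, and only three indicators per domain are shown rather than the full 100.

```python
# Illustrative sketch (not the FMTI authors' code) of aggregating binary
# transparency indicators into domain and overall scores.
# Indicator names and values below are hypothetical.

from typing import Dict

# Results for one hypothetical developer: 1 = disclosed, 0 = not disclosed.
indicators: Dict[str, Dict[str, int]] = {
    "upstream": {"data_sources": 1, "data_labor": 0, "compute": 0},
    "model": {"capabilities": 1, "limitations": 1, "risks": 0},
    "downstream": {"affected_markets": 0, "usage_reports": 0, "redress": 0},
}

def domain_score(domain: Dict[str, int]) -> float:
    """Share of indicators satisfied within one domain, out of 100."""
    return 100 * sum(domain.values()) / len(domain)

def overall_score(all_domains: Dict[str, Dict[str, int]]) -> float:
    """Share of all indicators satisfied across every domain, out of 100."""
    satisfied = sum(sum(d.values()) for d in all_domains.values())
    total = sum(len(d) for d in all_domains.values())
    return 100 * satisfied / total

for name, domain in indicators.items():
    print(f"{name}: {domain_score(domain):.0f}/100")
print(f"overall: {overall_score(indicators):.0f}/100")
```

Under this kind of scheme, a developer's overall score is simply the fraction of indicators it satisfies, which is why a mean of 37 out of 100 signals that most disclosures are missing across the board.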

Key findings include:

  • Overall transparency scores: The highest overall score was 54 out of 100, while the mean overall score was only 37, indicating a widespread lack of transparency;

  • Uneven transparency: While some companies scored well above the mean, others fell well below it, highlighting a significant disparity. This underlines the need for standardized transparency practices across the industry;

  • Opaque upstream resources: Transparency scores were particularly low in the upstream domain, which covers data, data labor, and compute resources. This points to a lack of disclosure about crucial aspects of developers’ AI development processes;

  • Limited disclosure of downstream impact: The study found virtually no transparency regarding the downstream impact of foundation models. Companies provided little to no information about the affected market sectors, individuals, or geographies, and failed to disclose usage reports or mechanisms for addressing potential harm caused by their models;

  • Open vs. closed developers: The study revealed that open developers, who release model weights and, in some cases, data, demonstrated higher levels of transparency than closed developers. However, even open developers had room for improvement in downstream transparency.

Kevin K., one of the study’s authors, summarizes the findings: “No foundation model developer gets a passing score on transparency. None of the 10 companies score more than 60% on the Foundation Model Transparency Index, showing that top companies do not share nearly enough information about how they develop and use foundation models.”

This comprehensive analysis demonstrates that the AI industry still has a long way to go on transparency. The opacity around downstream impact, for example, leaves users and the public in the dark about the societal consequences and potential harms of these AI systems.

A recommended path

The study also presents several recommendations for improving AI transparency in the foundation model ecosystem, addressed to the key decision-makers: developers, deployers, and policymakers.

Key suggestions include:

  • Foundation model developers should improve transparency by learning from their competitors’ practices. They can identify the indicators where they lack transparency and consult the practices of other developers who already achieve transparency in those areas. Notably, 82 of the 100 transparency indicators have already been satisfied by at least one major developer;

  • Foundation model deployers should push for greater transparency from developers. Deployers have leverage when deciding whether to adopt a developer’s model, and they should use it to secure transparency about the downstream use of foundation models;

  • Policymakers should prioritize transparency with precision. They should make transparency a top priority in legislative proposals and regulatory enforcement related to foundation models. Policymakers need to understand the current level of transparency and intervene in areas where transparency is most urgently needed.

In conclusion, the Foundation Model Transparency Index highlights the need for improved transparency in the AI ecosystem. The study identifies areas where transparency is lacking and provides recommendations for developers, deployers, and policymakers to drive progress in this regard. Transparency is essential for public accountability, responsible innovation, and effective governance of digital technologies.

At Lumenova AI, we welcome this study as a wake-up call for the industry. As the AI foundation model landscape continues to evolve, we aim to assist individuals and organizations in navigating the complexities of transparency. With our expertise, we can help you work through the intricacies of the AI ecosystem responsibly and make informed decisions.

Join the conversation on Twitter and LinkedIn!

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo