AI Chatbots Got Big—and Their Ethical Red Flags Got Bigger


"That's an extreme case of what we can't afford to let happen," Solaiman says. Solaiman's latest research at Hugging Face found that major tech companies have taken an increasingly closed approach to the generative models they released from 2018 to 2022.

Companies that guard their breakthroughs as trade secrets can also make the forefront of AI less accessible to marginalized researchers with few resources, Solaiman says. As more money gets shoveled into large language models, closed releases are reversing the trend seen throughout the history of the field of natural language processing.

"We have increasingly little knowledge about what data these systems were trained on or how they were evaluated, especially for the most powerful systems being released as products," says Alex Tamkin, a Stanford University PhD student whose work focuses on large language models.

Protecting human rights means moving past conversations about what's ethical and into conversations about what's legal, she says. Hickok and Hanna of DAIR are both watching the European Union finalize its AI Act this year to see how it treats models that generate text and imagery.

"Some things need to be mandated, because we have seen over and over again that if not mandated, these companies continue to break things and continue to push for profit over rights, and profit over communities," Hickok says. While policy gets hashed out in Brussels, the stakes remain high.