Why Timnit Gebru wants AI giants to think small

Any language model that claims to be all things to all people will almost certainly be less effective than one that is smaller but was purpose-built for a given task or community. 

Machine translation is one example: smaller models trained specifically on a given language often outperform gigantic models that purport to translate hundreds of languages at once but wind up doing a shoddy job with nondominant languages.

“When you know your context and your population and you curate your datasets for that reason, you build small.”

Combating Harmful Hype in Natural Language Processing

“Building small” stands to benefit not just the end users of AI technology, but also the much broader spectrum of companies working on these tools in and for communities around the world.

That’s the harm behind the hype: when a select few companies in Silicon Valley promise more than their technology can deliver, it becomes even harder for smaller companies to compete. But it doesn’t have to be that way.

Source: Fast Company
