|Reply to:||Was ich noch sagen wollte #255 by Cerberus|
Google has fired a well-known employee and AI researcher after she sought to publish a paper on problems with modern AI systems for natural language processing, which Google held back without giving reasons. In the meantime, nearly 2,000 Google employees have signed a letter of protest against this decision:
A paper co-authored by former Google AI ethicist Timnit Gebru raised some potentially thorny questions for Google about whether AI language models may be too big, and whether tech companies are doing enough to reduce potential risks. (...)
Gebru and her team submitted their paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” for a research conference. She said in a series of tweets on Wednesday that following an internal review, she was asked to retract the paper or remove Google employees’ names from it. She says she asked Google for conditions for taking her name off the paper, and if they couldn’t meet the conditions they could 'work on a last (employment) date.' Gebru says she then received an email from Google VP Jeff Dean informing her they were 'accepting her resignation effective immediately.' (...)
Since news of her termination became public, thousands of supporters, including more than 1,500 Google employees have signed a letter of protest. (...) The petitioners are demanding that Dean and others 'who were involved with the decision to censor Dr. Gebru’s paper meet with the Ethical AI team to explain the process by which the paper was unilaterally rejected by leadership.'
MIT Technology Review has read the paper and gives Google a poor grade for its treatment of its employee - and of scientific work in general:
MIT Technology Review obtained a copy of the research paper from one of the co-authors, Emily M. Bender, a professor of computational linguistics at the University of Washington. (...) It gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.
These [large AI language models] have grown increasingly popular—and increasingly large—in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, 'we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.' (...)
Google’s actions could create 'a chilling effect' on future AI ethics research. Many of the top experts in AI ethics work at large tech companies because that is where the money is. 'That has been beneficial in many ways, but we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.'
Within a few days, the story made it into the worldwide press, where Google is taking plenty of heat:
One can only hope that Google handles findings like these differently in the future and, in comparable situations, remembers its (now discarded) code of conduct, "Don't Be Evil". The company simply has too much power...