Does Big Tech Have The Right Talent To Win Our Confidence With Its AI Creation?

原始链接: https://www.zerohedge.com/technology/does-big-tech-have-right-talent-win-our-confidence-its-ai-creation

Big Tech's Generative AI: Overcoming Bias and Diversity Problems

Citing biased output from systems such as Google Gemini, The Epoch Times' Shannon Edwards raises concerns about Big Tech's ability to produce trustworthy artificial intelligence (AI). The problems include inaccurate portrayals of people and controversial comparisons, often attributed to "over-correction." While some may dismiss these incidents as mistakes, Silicon Valley insiders suggest they may represent intended features, stemming from the underrepresentation of viewpoints and experiences on the teams that set the rules for AI. Moreover, responsibility for defining "responsible" or "ethical" AI is still evolving, making it critical that Big Tech reassess its talent base and adopt more diverse and inclusive hiring practices. Despite heavy investment in, and public declarations about, responsible AI, major players such as Google remain predominantly young, largely male, and dominated by a particular political ideology, raising the question of whose perspectives steer the direction of groundbreaking technology. Many of the industry's established hiring practices prioritize elite educational credentials and rigorous screening methods. Despite recent progress toward more inclusive recruiting, opportunities for senior professionals and individuals outside the political mainstream remain scarce. This lack of diverse representation may stifle genuine innovation and further fuel debates about the objectivity and truthfulness of AI applications. Ultimately, embracing a broader range of ideas and experiences may be essential to realizing the full potential of advanced technology while addressing key ethical concerns.


Original text

Authored by Shannon Edwards via The Epoch Times,

The biggest generative AI gaffes of late, including Google Gemini’s text-to-image portrayal of “a Pope” as a woman, our Founding Fathers as Asian and black, and text responses suggesting false equivalencies between high-profile individuals such as Elon Musk and the Nazis, make for click-worthy headlines decrying Big Tech’s “clear bias,” but those don’t even begin to address the larger, and more nuanced, issue at hand.

The reason is that these hiccups resulting from an “over-correction” of Gemini’s output by its maker could be considered, in Silicon Valley parlance, a “feature” and not a “bug” of generative AI. And unless Big Tech rethinks its talent base and what constitutes “fair” and “equitable” in designing the rules around AI, we can only expect the problem to persist, and to be far harder to identify and root out in the future.

It’s important to acknowledge, though, that the development of “rules” in the creation of generative AI is not at its core scandalous, nor a secret. The entire industry, from nascent AI startups to behemoths such as Google, has been open about the more philosophical and nuanced work required to create AI innovation. Often referred to more specifically and functionally as “Responsible AI,” this work determines the “problems” that need to be addressed before the work of machine learning even begins. It’s a process and a competency that is arguably new to all tech companies playing in this space.

For Google, the work to create responsible AI “principles” began years ago, and the heads of this area share the details of their work freely. What we should take note of is that Google has solidified its “AI principles” and began training employees in the concepts as early as 2019. You can even find details about approaches and activities, such as “Moral Imagination workshops,” to see the depths of their commitment to this work. The relevance here, of course, is that Google is not the United Nations, nor even a good representative of the United States. With nearly 190,000 employees worldwide, 75 percent of Google’s employees are estimated to be younger than 30 and are self-reported as about half white and hovering around 33 percent female.

The fact that only 7 percent of Google's employees are older than 40 seems an exceptional disconnect when you consider that a large percentage of that demographic also identifies as Democrat, a party steadfast in its defense of President Joe Biden's age and mental acuity at 81.

You’ll also see stated on page 11 of Google’s recent diversity report that 7 percent of employees are self-identified as “LGBQ+ and/or Trans+,” but nowhere in the 115-page document will you find mention of age or diversity outside of the standard few we’ve come to accept: race, gender, LGBTQ+, and sometimes disability or veteran status. It does raise the question: Whose eyes are we seeing this new world of AI innovation through? And what are the bigger implications for what is fair and “true”?

An even stickier question is whether these companies have the capacity or interest to change. We have all heartily bought into the mythology of the hoodie-wearing tech “bro” made famous by Mark Zuckerberg—and forever embedded in our cultural anthology via the film “The Social Network”—and into the rigid framework for hiring that has been a point of pride for Silicon Valley companies for decades now.

Many of us worked within a rigid hiring framework that screens for “approved” schools or includes having employees partake in mental gymnastics no matter the job for which they are being hired. Recently, former Google employee and Silicon Valley marketing veteran Luanne Calvert shared in her TEDx Berlin talk an anecdote about barely “getting through” the hiring process herself. As a graduate of a less-prestigious college, it was only her unique skills and deep marketing experience that allowed her to be categorized as an “exception”—or, as she describes it, “an experiment.”

And although the hiring practices have changed (a bit), as Ms. Calvert notes in her talk and as I’ve seen via my former colleagues in Silicon Valley, you won’t find a recruitment push anytime soon for, say, an over-60 conservative. But I hope what we will find is that, without the broadest subset of thinking represented, AI won’t meet its potential; the unpredictability of these tools will continue to foster discussion, and an inability to fully commercialize them for a narrow audience will force a reckoning.

Perhaps, in the end, it will be capitalism that ultimately saves Google from itself.

*  *  *

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.
