<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Google fonts -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap"
rel="stylesheet">
<link rel="stylesheet" href="styles.css">
<title>Ethical AI Imperative</title>
</head>
<body>
<div class="content">
<img src="ethical_ai_header.png" alt=" mixed media image">
<div class="header">
<h1>An Ethical AI Imperative</h1>
<p class="subhead">Unveiling and Overcoming Racial and Gender Bias in AI-Enhanced Recruitment</p>
</div>
<div class="byline">
<p>By: <a href="https://github.com/ligea-alexander">Ligea Alexander</a></p>
</div>
<p>In an era where artificial intelligence (AI) permeates every aspect of our lives, the urgency for
ethical AI couldn't be clearer. This call for ethics rings especially loud as AI ventures into hiring,
a domain ripe with transformative potential but fraught with ethical quandaries.</p>
<p>A pivotal moment that underscores this need is the work of Timnit Gebru - you know, the Black
AI researcher who was fired from Google for shining a spotlight on AI's inherent biases.</p>
<p>Back in 2020, amidst the relentless advance of AI, Timnit Gebru, a prominent
figure in the realm of AI ethics, co-authored a pivotal paper, <a
href="https://dl.acm.org/doi/10.1145/3442188.3445922">"On the Dangers of Stochastic
Parrots: Can Language Models Be Too Big?"</a><br> The study, a collaborative effort with Emily M.
Bender and others, thrust into the spotlight the burgeoning concerns surrounding large
language models (LLMs). These AI behemoths, according to the paper, are not without their
pitfalls—ranging from their voracious environmental appetite and hefty financial toll to their
opaque nature, propensity for bias, lack of genuine language comprehension, and their potential
weaponization in the dissemination of disinformation.</p>
<p>However, the discourse surrounding the paper's findings soon took a backseat to a controversy
that erupted within the corridors of Google, the very crucible of technological innovation where
Gebru was employed. In a move that startled many, Google's management presented Gebru
with an ultimatum: retract the paper before its publication or expunge the names of Google
employees from its credits. The situation escalated when Gebru sought transparency over this
request, leading to her abrupt dismissal from Google—a move the company ambiguously
framed as her resignation.</p>
<p>Fast forward to today, with AI advancements accelerating, investigations like <a
href="https://bloom.bg/3It4cE4">
Bloomberg's </a> offer a stark revelation: these biases, specifically racial and gender
biases in simulated hiring processes, are not just theoretical concerns but tangible realities. In the
investigation, Leon Yin and his team at Bloomberg unearthed compelling evidence of AI's discriminatory
practices in hiring.<br> They found that resumes bearing names typically associated with Black Americans
were significantly less likely to be chosen as the top candidate for roles such as financial analyst
than resumes bearing names perceived as belonging to other racial or ethnic groups.</p>
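<p>To make the shape of such an audit concrete, the sketch below is a minimal, hypothetical Python
example (not Bloomberg's actual code or data): identical resumes are submitted with only the
candidate's name varied between name groups, and the rate at which each group is ranked first is
compared. The <code>rank_resumes</code> function, the name lists, and the resume text are
illustrative placeholders for whatever model and materials a real audit would use.</p>
<pre><code># Hypothetical audit sketch (not Bloomberg's code): submit identical resumes that
# differ only by candidate name, then compare how often each name group is ranked first.
import random
from collections import Counter

# Illustrative name lists only; a real audit would use larger, validated lists.
NAME_GROUPS = {
    "group_a": ["Darnell Washington", "Keisha Robinson"],
    "group_b": ["Connor Walsh", "Emily Schmidt"],
}

def rank_resumes(candidates):
    """Placeholder for the AI ranker under audit.

    In a real audit this would call the model being tested (for example, an LLM
    asked to pick the best candidate). Here it returns a random ordering so the
    script runs end to end; an unbiased ranker should pick each group about
    equally often.
    """
    return sorted(candidates, key=lambda _: random.random())

def audit(trials=1000):
    base_resume = "Financial analyst, 5 years of experience, identical qualifications."
    top_picks = Counter()
    for _ in range(trials):
        # Same resume text for every candidate; only the name (and its group) varies.
        candidates = [
            {"name": random.choice(names), "group": group, "text": base_resume}
            for group, names in NAME_GROUPS.items()
        ]
        winner = rank_resumes(candidates)[0]
        top_picks[winner["group"]] += 1
    for group in NAME_GROUPS:
        print(f"{group}: ranked first in {top_picks[group] / trials:.1%} of trials")

if __name__ == "__main__":
    audit()
</code></pre>
<p>A large, persistent gap between the groups' top-pick rates, despite identical qualifications,
would be the kind of name-based disparity the Bloomberg team reports.</p>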
<p> This clear indication of name-based discrimination underscores the real-world consequences of unchecked AI
biases in professional settings. It reinforces the critical warnings issued by
Timnit Gebru and others. This palpable evidence becomes particularly relevant for organizations like the <a
href="https://ceoworks.org">Center for Employment
Opportunities (CEO)</a>, which serves a predominantly African American population facing significant
barriers to
employment.</p>
<p>Consider the justice-involved—a group historically marginalized and now standing at a
precarious intersection where AI's potential for bias meets real-world consequences. The biases
identified by Yin's study illustrate a direct threat to these individuals, underscoring the pressing
need for ethical scrutiny and proactive intervention in AI's application in hiring.</p>
<p>The Bloomberg findings (and <a
href="https://www.sciencedirect.com/science/article/pii/S2949882124000148?via%3Dihub"> others like
them</a>), revealing a marked bias against African American names in simulated hiring
scenarios, invite a closer look at the implications for CEO's participants. This discrimination is
not just a technical glitch but a profound societal issue that demands immediate and thoughtful
action.</p>
<p>As we ponder the path forward, the intersection of AI and hiring practices presents a critical
juncture for populations akin to those served by CEO. <br> The evidence provided by Yin, echoing
Gebru's earlier warnings, emphasizes the necessity for a vigilant and ethical approach to AI
development. It begs the question: Why was a crucial voice like Gebru's silenced, especially
when her insights are now proven to be not only valid but vital? It's imperative that AI technologies serve
to enhance
equity and opportunity, honoring the spirit of pioneers like Gebru, rather than exacerbating existing
divides.</p>
</div>
<div class="footer">
<div class="content">
<h2 class="font-mono"> <a href="https://github.com/ligea-alexander/An-Ethical-AI-Imperative"> Visit the
Repo</a></h2>
<a href="https://github.com/ligea-alexander/An-Ethical-AI-Imperative"><img class="h-24 max-w-full"
src="github-mark.png" alt="link to github repo"></a>
</div>
</div>
</div>
</body>
</html>