Notes
Chapter One: Mirror, Mirror, on the Wall
(2)
See the case of Paul Zilly as told by Fry (2018, 71-72).
For more details, see Julia Angwin, Jeff Larson, Surya Mattu, and Lauren
Kirchner, “Machine Bias,” ProPublica, May 23, 2016,
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
(3)
For example, in 2016 a local police zone in Belgium started
using predictive policing software to predict burglaries and vehicle
theft (AlgorithmWatch 2019, 44).
(4)
BuzzFeedVideo, “You Won’t Believe What Obama Says in this
Video!”
https://www.youtube.com/watch?v=cQ54GDm1eL0.
Chapter Two: Superintelligence, Monsters, and the AI Apocalypse
(1)
Some talk of taming or domesticating AI, although the
analogy with wild animals is problematic, if only because, in contrast to
the “wild” AI some imagine, animals are limited by their natural
faculties and can be trained and developed only up to a point (Turner
2019).
(2)
It is often suggested that Mary Shelley must have been
influenced by her parents, who discussed not only politics, philosophy, and
literature but also science, and by her partner Percy Bysshe Shelley,
who was an amateur scientist with a particular interest in
electricity.
Chapter Three: All About the Human
(1)
Dreyfus was influenced by Edmund Husserl, Martin Heidegger,
and Maurice Merleau-Ponty.
Chapter Four: Are They Really Just Machines?
(1)
A real-world case of this was the robot dog Spot, which was
kicked by its developers to test it, something that met with
surprisingly empathetic responses:
https://www.youtube.com/watch?v=aR5Z6AoMh6U.
Chapter Five: The Technology
(1)
See:
https://www.humanbrainproject.eu/en/.
(2)
See, for example, the definition of AI given by the European
Commission’s High-Level Expert Group on AI (2018).
Chapter Six: Don’t Forget the Data (Science)
(2)
Concrete examples such as Facebook, Walmart, American
Express, Hello Barbie, and BMW are drawn from Marr
(2018).
Chapter Eight: The Non-responsibility of Machines and Unjustified Decisions
(1)
One could ask, however, whether decisions made by AIs really
count as decisions, and if so, whether there is a difference in the kinds of
decisions we delegate or should delegate to AIs. In this sense, the
problem regarding responsibility of or for AI raises the very question
of what a decision is. The problem also connects with issues about
delegation: we delegate decisions to machines. But what does this
delegation entail in terms of responsibility?
(2)
Indeed, this case is more complicated since one could argue
that the delegate is then still responsible for that particular task—at
least to some extent—and it may not be clear how the responsibility is
distributed in such cases.
(3)
Note that this was not, and is not, always the case; as Turner
(2019) reminds us, there are cases of animals being
punished.
Chapter Nine: Bias and the Meaning of Life
(1)
Thanks to Bill Price for the thought
experiment.
Chapter Ten: Policy Proposals
(1)
See:
https://www.acrai.at/en/.
(2)
The resolution can be found here:
http://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html?redirect#title1.
(4)
See:
https://www.partnershiponai.org/.
(10)
See:
https://www.stopkillerrobots.org/.
(11)
See:
https://futureoflife.org/ai-principles/.
(12)
Consider people such as Batya Friedman and Helen Nissenbaum
in the United States, and later Jeroen van den Hoven and others in the
Netherlands, who have been championing the ethical design of technology
for some time.
Chapter Eleven: Challenges Facing Policymakers
Chapter Twelve: The Challenge of Climate Change: On Priorities and the Anthropocene
(1)
See: https://hai.stanford.edu/ and
https://hcai.mit.edu.