Teaching students to make better choices in an algorithm-driven world

In January, Colby College announced the establishment of the Davis Institute for Artificial Intelligence, calling it “the first cross-disciplinary institute for artificial intelligence at a liberal arts college.” No other liberal arts college has undertaken anything like it. The traditional role of these institutions is to train graduates to live in a democratic society. By contrast, AI centers such as the Stanford Artificial Intelligence Laboratory have largely focused on high-quality, specialized training for students in advanced mathematics and computer engineering. What can small liberal arts colleges offer in response?

A hint comes from a statement by the Davis Institute’s first director, natural language processing expert Amanda Stent. “The broad and profound societal impact of AI will continue, which means the whole community has a say in what we do with it. For that to happen, each of us needs to have a basic understanding of the nature of this technology,” she said.

What counts as a “basic understanding” of artificial intelligence? Can you really understand the tangled neural networks underlying driverless vehicles without having taken advanced calculus? Do most of us need to understand them in depth, or only in general terms?

A useful analogy: is the goal to train mechanics and automotive designers, or simply people who can drive cars responsibly?

If it is the former, most liberal arts colleges are out of luck. Many of them struggle to hire and retain people with the technical knowledge and experience to teach in the field. Someone proficient in algorithm design can make a comfortable living in industry or at a large, well-funded research university pursuing major scientific undertakings.

If it is the latter, most small liberal arts colleges are well equipped to train students in the social and moral challenges that artificial intelligence presents. These colleges specialize in providing a comprehensive education that prepares people not only to acquire technical skills for the workforce but also to become full, well-rounded citizens. Increasingly, that will include wrestling with the proper social use of machine learning in a world driven by algorithms, artificial intelligence and big data.

In a wonderful article, Nir Eisikovits and Dan Feldman, two researchers at the Applied Ethics Center at the University of Massachusetts Boston, identify a central threat of our algorithm-driven society: the loss of the human capacity to make good choices. Aristotle called this capacity phronesis, the practical wisdom of how to live well in society with others. For Aristotle, the only way to gain this wisdom is through habituation, the experience of interacting with others in varied situations. When machines choose for us rather than our choosing for ourselves, we risk losing the opportunity to develop civic wisdom. As algorithms increasingly select what we see, read, or listen to on social media, we lose the practice of choosing. That may not seem to matter when the choice is tonight’s Netflix pick, but it has larger implications. If we no longer choose even our entertainment, does that erode our ability to make moral choices?

Eisikovits and Feldman pose a provocative question: if humans cannot achieve phronesis, do we forfeit the high regard that John Locke and other philosophers in the natural rights tradition had for our capacity for self-governance? Do we lose the ability to be self-governing? Or, perhaps more important, do we lose the ability to know when that capacity is being taken away from us? The liberal arts can equip students with the tools they need to develop phronesis.

But without a basic understanding of how the technology works, can a liberal arts major apply that “wisdom” to a changing reality? Instead of arguing over whether we need people who have read Chaucer or people who understand what gradient descent is, we should train people to do both. Colleges should take the lead in training students in a “technological phronesis” that combines working knowledge of AI with liberal arts knowledge, so they understand how to position themselves in an AI-driven world. That means not only being able to “drive the car responsibly” but also understanding how the internal combustion engine works.

Undoubtedly, engagement with this technology can and should be woven throughout the curriculum, not only in specialized courses such as “philosophy of technology” or “the literature of surveillance” but also in introductory courses and as part of the core curriculum across subjects. But that is not enough. Professors in these courses need special training to develop or use frameworks, metaphors, and analogies that explain the ideas behind artificial intelligence without requiring advanced computer science or mathematics.

In my own case, I try to teach students to become algorithmically literate in a political science course subtitled “Algorithms, Data and Politics”. The course examines how the collection and analysis of data create unprecedented challenges and opportunities for the distribution of power, equality and justice. In this class, I rely on metaphors and analogies to explain complex concepts. For example, I describe a neural network as a huge panel with thousands of dials (each representing a feature or parameter) that are fine-tuned thousands of times a second to produce the desired output. I talk about datafication and efforts to make users predictable as a kind of “factory farming,” in which the variability affecting “production” is minimized.
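For readers who want to see the “panel of dials” metaphor made concrete, a toy version fits in a few lines of code. This is a hypothetical illustration, not material taken from the course: each weight is a “dial,” and a simple gradient-descent loop repeatedly nudges every dial toward a setting that reduces the prediction error.

```python
# Toy illustration of the "panel of dials" metaphor.
# Each weight is a dial; gradient descent turns every dial slightly
# against its share of the error, over and over, until the panel
# produces the desired output.

def predict(weights, x):
    # A prediction is just the dials applied to the input features.
    return sum(w * xi for w, xi in zip(weights, x))

def train(data, n_dials, lr=0.01, epochs=1000):
    weights = [0.0] * n_dials            # all dials start at zero
    for _ in range(epochs):
        for x, target in data:
            error = predict(weights, x) - target
            # Nudge each dial a tiny amount to shrink the error.
            for i in range(n_dials):
                weights[i] -= lr * error * x[i]
    return weights

# Learn the rule y = 2*a + 3*b from a handful of examples.
data = [([1, 0], 2), ([0, 1], 3), ([1, 1], 5), ([2, 1], 7)]
weights = train(data, n_dials=2)
```

After training, the two dials settle near 2 and 3, the coefficients hidden in the examples. Real neural networks differ in scale (millions of dials, nonlinear layers), but the tuning loop is the same idea.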

Are these perfect analogies? No. I am sure my descriptions miss key elements, partly by design, to encourage critical thinking. But the alternative is unacceptable. A society of people who have no idea how AI, algorithms and machine learning work is a society ready to be captured and manipulated. We cannot set the bar for understanding so high that only mathematicians and computer scientists are able to talk about these tools. Nor can our training be so superficial that students develop imperfect and misleading (e.g., techno-utopian or techno-dystopian) visions of the future. Just as the liberal arts emphasize breadth, wisdom, and human development, we need AI training for society that is deliberately broad rather than narrowly specialized.

As Notre Dame humanities professor Mark Roche notes, “The college experience is a once-in-a-lifetime opportunity for many to ask the great questions without being overwhelmed by the distractions of material needs and practical applications.” A liberal arts education provides a stable foundation that allows students to navigate this increasingly fast-paced, confusing world. Knowledge of the classics, an appreciation of arts and letters, and an understanding of how the physical and human sciences work are enduring assets that serve students well at any age. But as the tools that govern our lives grow more complex, we need to be more purposeful about asking those “great questions.”
