Artificial intelligence in schools demands real-world responsibility

In this day and age, artificial intelligence affects almost every aspect of our lives. Whether your job application is accepted, whether your insurance claim is approved, or which video plays next while you watch YouTube, AI has a hand in the decision.

Whether we like it or not, our fate is often determined by algorithms that see us as a cloud of data points, not as human beings. So when we apply this technology to education, a foundation of our society, we must ensure that our approach is responsible and fair, treating those affected by our tools as human beings.

AI meets education

One of the primary applications of AI is to greatly enhance an organization’s ability to perform tasks that require certain kinds of reasoning. In education, this increase in capacity already takes many forms. Most fundamentally, the grading of multiple-choice quizzes and tests is now essentially instantaneous. But machine learning can do far more with that data: it can show where students are thriving and where they need more academic support, or dynamically personalize instructional materials to help a child learn effectively.
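The two capabilities mentioned here, instant grading and spotting where students need support, can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the questions, topics, and answer key are all hypothetical.

```python
# Minimal sketch: automated multiple-choice grading plus per-topic
# aggregation to flag where a class may need extra support.
# All question IDs, topics, and responses are hypothetical.
from collections import defaultdict

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}
TOPICS = {"q1": "fractions", "q2": "fractions", "q3": "geometry"}

def grade(responses):
    """Score one student's responses against the answer key."""
    return {q: responses.get(q) == a for q, a in ANSWER_KEY.items()}

def topic_report(all_responses):
    """Aggregate correctness per topic across the whole class."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for responses in all_responses:
        for q, ok in grade(responses).items():
            total[TOPICS[q]] += 1
            correct[TOPICS[q]] += ok
    # Fraction of correct answers per topic
    return {t: correct[t] / total[t] for t in total}

class_responses = [
    {"q1": "b", "q2": "c", "q3": "a"},
    {"q1": "b", "q2": "d", "q3": "c"},
]
report = topic_report(class_responses)
```

A teacher dashboard could then highlight any topic whose score falls below a chosen threshold, which is the "where students need more support" signal described above.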

However, today’s students interact with information in a very different way than we are used to. Kids who have only known a world in which AI-based systems are pervasive turn to search engines for answers before going to their own parents and teachers. This trend should not be news to anyone; in fact, the evidence was documented a decade ago. And as one-to-one device programs gain traction (adopted by more than 50 percent of schools in 2017, climbing to between 80 and 90 percent during the pandemic), it is safe to assume that more students will put their questions to Google before their instructors.

It is not difficult to see the child’s reasoning here. With so many sources of knowledge available on the Internet, why ask a single teacher? Asking a search engine also avoids the awkward or difficult conversations that more serious questions can entail. And this simple fact highlights both the promise and the risk of implementing AI in education.

Asking difficult questions

Students struggling with mental health challenges who are not equipped to deal with them alone often seek out online resources to help themselves. When schools have access to that information, they can intervene, provide help, and potentially save lives.

The next generation of distress-detection systems, such as Securly’s Auditor, use risk heuristics, natural language processing, and other AI tools to estimate the sentiment behind a student’s message and interpret their mental state, taking the student’s past data into account. That context helps prevent false positives and provides a clearer picture of which subjects students are most concerned about, or whether they need immediate attention. If the system determines that a student is at risk, the case is brought to the attention of trained human analysts on Securly’s 24 team, who reach out to school-appointed emergency contacts.
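The pipeline described above, scoring a message, weighing it against the student's history to damp false positives, and escalating to a human only past a threshold, can be sketched crudely. This is a toy lexicon-based illustration under stated assumptions; real systems such as Securly's use far more sophisticated NLP, and every term, weight, and threshold here is invented for the example.

```python
# Hypothetical sketch of a distress-detection flow: a crude lexicon score
# is combined with recent history before a message is escalated to a
# human reviewer. All terms, weights, and thresholds are illustrative.

RISK_TERMS = {"hopeless": 3, "alone": 1, "hurt myself": 5}
ESCALATION_THRESHOLD = 5

def risk_score(message):
    """Sum the weights of any risk terms present in the message."""
    text = message.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

def should_escalate(message, history_scores):
    """Escalate to a human analyst only when the current message plus
    the average of recent scores crosses the threshold; the history
    term is what damps one-off false positives."""
    recent = history_scores[-5:]
    context = sum(recent) / max(len(recent), 1)
    return risk_score(message) + context >= ESCALATION_THRESHOLD

# A single mild message with no prior history stays below the threshold...
flag_mild = should_escalate("I feel alone today", [])
# ...but explicit language is flagged for immediate human review.
flag_urgent = should_escalate("I want to hurt myself", [])
```

The key design point, mirrored from the article, is that the code never acts on its own: a `True` result only routes the case to trained human analysts.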

Natural language processing can also be applied to other aspects of students’ socio-emotional learning. Comments students make to one another can be analyzed before a message is sent, allowing an AI-enabled system to detect bullying or hate speech and then help students understand how they can manage their emotions without hurting themselves or others.
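A pre-send check like the one described could work as a simple hook in the messaging flow: screen the comment, and if it is flagged, hold delivery and show the sender a reflective prompt instead. This is a minimal sketch; the term list and prompt wording are hypothetical, and a production system would use a trained classifier rather than a keyword set.

```python
# Sketch of a pre-send screen: before a comment is delivered, it is
# checked for bullying language; flagged messages are held and the
# sender is shown a prompt to reconsider. Terms are hypothetical.

BULLYING_TERMS = {"loser", "stupid", "nobody likes you"}

def screen_comment(comment):
    """Return (deliver, prompt): deliver the comment as-is, or hold it
    and return a reflective prompt for the sender."""
    lowered = comment.lower()
    if any(term in lowered for term in BULLYING_TERMS):
        return False, "This message may hurt someone. Do you want to rephrase it?"
    return True, None

deliver_ok, prompt_ok = screen_comment("Great job on the project!")
deliver_flagged, prompt_flagged = screen_comment("You're such a loser")
```

Prompting the sender, rather than silently blocking or reporting, is what gives the student the chance to manage the emotion themselves, as the paragraph above suggests.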

Of course, edtech companies and other actors handling such sensitive data must ensure that student privacy is preserved as much as possible while still averting tragedy and helping students in need.

Disadvantages of relying too much on AI

We also need to be aware of the issues inherent in this approach, such as AI bias, a problem that has dogged automated systems since the early days of modern computing. More recently, concerns have been raised that AI in some learning management systems may mislabel a student as underperforming, leading to inappropriate interventions in educational settings.

AI bias can manifest itself in different ways. When designing any AI tool, it is fundamentally important to ensure the system is audited for bias. It is also important to understand that bias can creep in unexpectedly, even when measures are taken to prevent it.

As AI systems enter new spaces, finding consensus on what is morally acceptable becomes increasingly difficult. Allowing AI tools to support or monitor a child’s socio-emotional development is clearly not a decision to take lightly. Still, it is important to find solutions that can help students in need.

AI systems used in these contexts should work primarily in an advisory capacity. While AI can identify when a student needs mental health care, district and school staff should provide the necessary support. AI can be a wonderful tool to help students grow and prosper, but it needs a human touch to be truly effective.
