They are Doing More Harm than Good to THESE Children

Schools trying to protect children from self-injury are doing more harm than good. They’ve been using “dubious AI-powered software.” While the robot does manage to get it right once in a while, the vast majority of alerts are false alarms that waste police time. It can also be traumatic for the kids improperly flagged and a nightmare for the whole family.

Track the children

Schools should be focusing on teaching children the fundamental skills they’ll need to compete in the adult world of business and industry. Instead, they’re spending all their resources on indoctrinating students into social conformity with progressive ideals.

Practically every district in America has been hovering over students’ shoulders, snooping on their internet usage.

An artificially intelligent program called “GoGuardian Beacon” is “being installed on high school students’ school-issued devices.” It “tracks every word they type.” The algorithm “then analyzes the language for evidence of teenagers wanting to harm themselves.”

Nobody is surprised to learn that it’s usually wrong, with “chaotic and traumatic results” for the children involved.

What a student types into a search query or homework assignment doesn’t always mean what it seems to. The lab-created entity often misinterprets what children type in. For instance, one “17-year-old in Neosho, Missouri,” was dragged out of bed in the middle of the night by police.

Her parents were furious to learn the reason: “a poem she had written years ago triggered the alarms.” Couldn’t that wait until morning, they angrily grumbled at the cops.

Safeguard students

The software creator swears up and down that their product really does “safeguard students from physical harm.” That’s not the way the parents of the kid in Neosho see it. “It was one of the worst experiences of her life,” the teen’s mother relates.

It’s been happening since the children all got locked down for COVID. That, critics say, led to “widespread surveillance of students in their own homes.”

Part of the problem is that the keyword searches are too broad. GoGuardian and similar systems “are designed to flag keywords or phrases to figure out if a teen is planning to hurt themselves.”
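To see how broad keyword matching goes wrong, here’s a minimal sketch in Python. This is a guess at the general technique, not GoGuardian’s actual code; the keyword list and sample sentences are invented for illustration.

```python
# A minimal sketch of naive keyword flagging -- an assumption about how
# such systems work in general, NOT GoGuardian's actual implementation.
KEYWORDS = {"die", "kill", "hurt myself", "end it all"}  # invented list

def flag(text: str) -> list[str]:
    """Return every keyword found anywhere in the text, context-free."""
    lowered = text.lower()
    return [kw for kw in KEYWORDS if kw in lowered]

# A literature assignment quoting Hamlet trips the alarm...
print(flag("To die, to sleep; to sleep, perchance to dream."))  # ['die']

# ...and raw substring matching is even worse: "studied" contains "die".
print(flag("I studied all night for the chemistry final."))  # ['die']
```

Without context, the same match fires for Shakespeare homework as for a genuine crisis, which is exactly why the false alarms pile up.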

Children aren’t very careful about choosing search terms that won’t ring alarm bells. At the same time, they’re very good at finding ways to bypass the nanny-state software.

That’s probably one of the reasons none of the AI search nannies will release any solid statistics. We “have no idea if they’re at all effective or accurate, since the companies have yet to release any data.” Even a blind squirrel can find a nut once in a while, the programmers claim.

Once in a while they find children having a real crisis. “Besides false alarms, schools have reported that the systems have allowed them to intervene in time before they’re at imminent risk at least some of the time.” That’s a really weak argument. The few times they get it right aren’t enough. “The software remains highly invasive and could represent a massive intrusion of privacy. Civil rights groups have criticized the tech, arguing that in most cases, law enforcement shouldn’t be involved.”

About Post Author

Mark Megahan
