
Google’s AI Found to Have a Bug That Mixes Up Hate Messages With Good Ones

Google has developed a ton of artificial intelligence tools, and they have all been put to genuinely good use. Chief among those uses are internet safety and shutting down internet trolls. But it seems there are still a few glitches left to fix. A team of researchers has discovered that Google’s artificial intelligence can sometimes confuse hate messages with loving messages that spread positivity.

So, what is this all about? Back in 2016, Google’s technology incubator Jigsaw showed that artificial intelligence can do more than just assign a toxicity score to an extract of text. The system was exposed to sentences containing rudeness or disrespect, so that the software could then determine whether a given text is hurtful or not. But, as it turns out, the artificial intelligence is fairly easy to deceive.

So, how is this actually possible? Inserting spaces into harmful words makes the software believe the text is not as violent as it really is. Adding the word “love” also lowers the toxicity score of an extract. Even a text containing several toxic words will, once “love” or another harmless word is appended, receive a substantially lower toxicity score from Google’s system.
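For readers curious what such a comparison might look like in practice, here is a minimal sketch that queries Jigsaw’s publicly documented Perspective API and scores a comment with and without a harmless word appended. The endpoint and request fields follow the public documentation, while the sample phrases and the `PERSPECTIVE_API_KEY` environment variable are assumptions for illustration, not details from the researchers’ report.

```python
import os
import requests

# Perspective API "analyze comment" endpoint, per Jigsaw's public documentation.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # hypothetical: your own API key


def toxicity_score(text: str) -> float:
    """Return the TOXICITY probability (0.0-1.0) the API assigns to `text`."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    original = "you are a terrible, stupid person"          # illustrative toxic phrase
    padded = original + " love love love"                   # harmless words appended
    print(f"original: {toxicity_score(original):.2f}")
    print(f"padded:   {toxicity_score(padded):.2f}")        # typically scores noticeably lower
```

Running both requests side by side is enough to reproduce the effect the researchers describe: the padded version usually comes back with a markedly lower toxicity score, even though the hurtful content is unchanged.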

Pending Work

This shows that Google still has a lot of work left on its artificial intelligence. The technology has to tackle not just hate speech but also fake news, which is leaving a big dent on the internet right now. Many internet and social media companies see hate speech as a rising issue, and one they would love to see crushed.

Needed Tool

And that is why so many expectations rest on this artificial intelligence technology from Google. If deployed in the real world, it could substantially reduce the amount of hate speech and degrading language that we encounter online on a regular basis.

But this is not the first time an artificial intelligence has gotten things wrong. A while back, Microsoft unveiled a bot that could chat with Twitter users and learn from the way those users interacted with it. Based on those interactions, the bot became racist and misogynistic in just a matter of hours.