AI and turning off thought

https://files.catbox.moe/6xs4km.png

This is a LinkedIn post by a charming person with whom I used to work. English is the person's first language.

https://files.catbox.moe/bnj21f.png

This is a Pangram analysis of the entire LinkedIn post: the entire post is most likely generated by AI.

https://files.catbox.moe/vlcait.png

This is a reaction to the post. A person whom I respect claims to love the post.

What does the post say about the human who published it? About the one who loved it?

Every human makes mistakes. However, using AI turns off thought, notably critical thought. A six-month-old human innately reflects and learns; AI is just autocorrect on steroids, built on a fraction of the data that passes through an average human in a single day.

The more humans use a sycophantic bullshit generator, the more they succumb to its allure. That is the nature of bullshit.

Amazon now mandates that AI-generated code written by junior and mid-level programmers be reviewed before it crashes their systems.

Microsoft owns LinkedIn. Microsoft's CEO claims to use only AI chatbots instead of reading email, which should get him fired.

Alas, here we are. AI has its uses, but it is rarely worth it, mainly because a single AI interaction takes a monumental toll on the climate and often produces erroneous results.

If people asked strangers on the street about certain things, they might get wrong answers. Would those strangers be sycophants and liars? Maybe, but they would be unlikely to make the repetitive, idiotic 'mistakes' made by popular AI chatbots trained mainly on stolen data.

Would you befriend AI?

#ArtificialIntelligence