<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>artificialintelligence &#8212; Niklas&#39;s thoughts</title>
    <link>https://thoughts.pivic.com/tag:artificialintelligence</link>
    <description>Music and other stuff</description>
    <pubDate>Mon, 11 May 2026 15:17:05 +0200</pubDate>
    <item>
      <title>Book reviews status and why AI is worse than drug addiction</title>
      <link>https://thoughts.pivic.com/book-reviews-status-and-why-ai-is-worse-than-drug-addiction</link>
      <description>&lt;![CDATA[book&#xA;&#xA;There was recently an interesting article published in the New York Times: Where Have All the Book Reviews Gone?&#xA;&#xA;  It’s a grim business to linger on the numbers. In the 1960s, a good first novel might receive 90 individual newspaper reviews in America and England, the novelist Reynolds Price wrote in his memoir “Ardent Spirits.” By 2009, the year “Ardent Spirits” was issued, he reckoned the number was 20 at best. What would it be now? Two? Three?&#xA;    A few magazines, of course, still run inspired book criticism; essential trees are still standing though the vast underbrush is gone. And the online discourse has its moments. But here’s another number: Not long ago, someone estimated that there were seven full-time book critics left in America. With The Post’s Book World gone, that number has dropped to five.&#xA;    As a lonely and shellshocked survivor of this decimation, I find it hard not to envy the critics in London, which still has at least seven daily or Sunday papers in which a serious author might hope for a review. The literary debate over there is more like a boisterous dinner party and less like a Morse code dispatch between distant frigates passing in the night.&#xA;&#xA;AI will, naturally, never replace humanity; even if Skynet happens and every single Homo sapiens is physically murdered by machines, there&#39;s no replacement for people like Toni Morrison, Lester Bangs, or Anthony Lane. From the article:&#xA;&#xA;  But here’s a catch with A.I. It’s easy to tell when a reference, or a comparison, or a sentence, doesn’t belong to a writer. Erudition and style aren’t forgeable for long; they still must be earned. As for A.I.’s sleek, space-efficient text, we’ve already grown accustomed to what that sounds like — the flat, consistent tone, the pert little summary bits, the repetitions, the impersonal and fluorescent-lit mood. 
Reading it, you feel you’ve been through the desert on a horse with no name.&#xA;&#xA;At times, I&#39;ve used Pangram, the AI detector service, to see how much some people are using AI. A former mountebank manager of mine used to answer team chat messages by physically scurrying away and then regurgitating something that AI handed to him without really knowing what he did. It reminds me of this video.&#xA;&#xA;Is there a difference between people who use AI and people who are addicted to drugs? People who do drugs either want to feel something they can&#39;t feel without drugs or they want to feel nothing; people who use AI want to outsource thought and also outsource their ability to feel.&#xA;&#xA;When doing drugs, there&#39;s a toll on yourself and other people.&#xA;&#xA;When doing AI, the climate catastrophe marches on and you still have to reverse-engineer a pile of slop to be able to use any of what&#39;s usable.&#xA;&#xA;Speaking for myself, the use of AI is often far worse than doing destructive drugs. I&#39;m not kidding.&#xA;&#xA;#ArtificialIntelligence #drugs #books #BookReviews ]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://cdn.dribbble.com/userupload/19646652/file/original-b84277d6110f0722a534324ac2c977a8.gif" alt="book"></p>

<p>There was recently an interesting article published in the New York Times: <a href="https://www.nytimes.com/2026/04/27/books/review/ai-book-reviews.html">Where Have All the Book Reviews Gone?</a></p>

<blockquote><p>It’s a grim business to linger on the numbers. In the 1960s, a good first novel might receive 90 individual newspaper reviews in America and England, the novelist Reynolds Price wrote in his memoir “Ardent Spirits.” By 2009, the year “Ardent Spirits” was issued, he reckoned the number was 20 at best. What would it be now? Two? Three?</p>

<p>A few magazines, of course, still run inspired book criticism; essential trees are still standing though the vast underbrush is gone. And the online discourse has its moments. But here’s another number: Not long ago, someone estimated that there were <a href="https://worldliteraturetoday.org/2025/september/criticism-literature-why-it-vanishing-adam-morgan#:~:text=In%20fact%2C%20as,occasionally%20covers%20TV).">seven full-time book critics left</a> in America. With The Post’s Book World gone, that number has dropped to five.</p>

<p>As a lonely and shellshocked survivor of this decimation, I find it hard not to envy the critics in London, which still has at least seven daily or Sunday papers in which a serious author might hope for a review. The literary debate over there is more like a boisterous dinner party and less like a Morse code dispatch between distant frigates passing in the night.</p></blockquote>

<p>AI will, naturally, never replace humanity; even if Skynet happens and every single Homo sapiens is physically murdered by machines, there&#39;s no replacement for people like Toni Morrison, <a href="https://en.wikipedia.org/wiki/Psychotic_Reactions_and_Carburetor_Dung">Lester Bangs</a>, or <a href="https://www.penguinrandomhouse.com/books/97560/nobodys-perfect-by-anthony-lane/">Anthony Lane</a>. From the article:</p>

<blockquote><p>But here’s a catch with A.I. It’s easy to tell when a reference, or a comparison, or a sentence, doesn’t belong to a writer. Erudition and style aren’t forgeable for long; they still must be earned. As for A.I.’s sleek, space-efficient text, we’ve already grown accustomed to what that sounds like — the flat, consistent tone, the pert little summary bits, the repetitions, the impersonal and fluorescent-lit mood. Reading it, you feel you’ve been through the desert on a horse with no name.</p></blockquote>

<p>At times, I&#39;ve used <a href="https://www.pangram.com/">Pangram</a>, the AI detector service, to see how much some people are using AI. A former mountebank manager of mine used to answer team chat messages by physically scurrying away and then regurgitating something that AI handed to him without really knowing what he did. It reminds me of <a href="https://loops.video/v/fkKs61UJbx">this video</a>.</p>

<p>Is there a difference between people who use AI and people who are addicted to drugs? People who do drugs either want to feel something they can&#39;t feel <em>without</em> drugs or they want to feel nothing; people who use AI want to outsource thought and <em>also</em> outsource their ability to feel.</p>

<p>When doing drugs, there&#39;s a toll on yourself and other people.</p>

<p>When doing AI, the climate catastrophe marches on and you still have to reverse-engineer a pile of slop to be able to use any of what&#39;s usable.</p>

<p>Speaking for myself, the use of AI is often far worse than doing destructive drugs. I&#39;m not kidding.</p>

<p><a href="https://thoughts.pivic.com/tag:ArtificialIntelligence" class="hashtag"><span>#</span><span class="p-category">ArtificialIntelligence</span></a> <a href="https://thoughts.pivic.com/tag:drugs" class="hashtag"><span>#</span><span class="p-category">drugs</span></a> <a href="https://thoughts.pivic.com/tag:books" class="hashtag"><span>#</span><span class="p-category">books</span></a> <a href="https://thoughts.pivic.com/tag:BookReviews" class="hashtag"><span>#</span><span class="p-category">BookReviews</span></a></p>
]]></content:encoded>
      <guid>https://thoughts.pivic.com/book-reviews-status-and-why-ai-is-worse-than-drug-addiction</guid>
      <pubDate>Wed, 29 Apr 2026 07:44:27 +0200</pubDate>
    </item>
    <item>
      <title>Elliott Smith and Ludwig Wittgenstein</title>
      <link>https://thoughts.pivic.com/elliott-smith-and-ludwig-wittgenstein</link>
      <description>&lt;![CDATA[Elliott cover&#xA;&#xA;I&#39;m reading a forthcoming biography of Elliott Smith, titled Nobody Broke Your Heart: An Intimate Biography of Elliott Smith.&#xA;&#xA;So far, I&#39;ve only read the introduction. It bears the hallmarks of a great fucking book.&#xA;&#xA;  Twenty-three years after his death, Elliott still isn’t particularly well known, or well understood, but he is terribly loved. The task of understanding and preserving his legacy has become a collective project. There are YouTube accounts like I Remember Elliott; his old fan site Sweet Adeline, defiantly mired in Web 1.0; oral-history blogs like so flawed and drunk and perfect still; Smiling at Confusion, a site for posting guitar tabs, guidance on fingerings and chords. Fans share bootleg recordings and unreleased songs, reflections on his lyrics or entreaties for help understanding them, and they speculate darkly on his death. Below videos you’ll find hundreds of comments, people gushing over Elliott’s fingerpicking, arguing about whether he’s on something at this concert or just tired but clean, thanking him for accompanying them through depression or addiction, for making them feel less alone.&#xA;&#xA;What is immediate, what is human, that is love.&#xA;&#xA;---&#xA;&#xA;Recently, a friend asked me to run some of their works through AI to see what it would create. Instantly, it generated some seemingly worthwhile stuff, but in actuality AI is autocorrect on steroids. 
My friend isn&#39;t very knowledgeable about AI but they produce stuff that&#39;s, frankly, some of the best I&#39;ve ever read and heard in their &#39;fields&#39;.&#xA;&#xA;Three recent articles about AI:&#xA;&#xA;AI overly affirms users asking for personal advice&#xA;Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender&#xA;The AI Industry Is Lying To You&#xA;&#xA;Ludwig Wittgenstein&#xA;Ludwig Wittgenstein&#xA;&#xA;Wittgenstein is one of my favourite modern philosophers. I highly recommend Ray Monk&#39;s beautiful Ludwig Wittgenstein: The Duty of Genius.&#xA;&#xA;A recently published article on large language models (LLMs) as they relate to Wittgenstein&#39;s views on language, semantics, and mathematics is very interesting indeed.&#xA;&#xA;  When Wittgenstein referred to the “beginning of the end of humanity,” he was not envisioning sci-fi cataclysms on the order of The Matrix or The Terminator or even Dr. Strangelove. He was referring to the end of humanity not primarily in terms of its biological survival, but in terms of what he called the “form of life” we inhabit. That form of life is threatened not so much by industrialization, nukes, robots, or AI agents as by a way of thinking that lowers human life to the plane of science and technology. 
Wittgenstein’s attempt to draw attention to that way of thinking—and dissuade us from it—is of the utmost importance in an era where the developing AI ideology threatens to further distort our understanding of how we use language and how we live.&#xA;&#xA;A more in-depth excerpt from the wondrously and sharply written article:&#xA;&#xA;  The parts of the Investigations where Wittgenstein probes our concepts of thinking and understanding can help us escape the conceptual muddles that plague discussions and debates over AI and so-called “artificial general intelligence.”&#xA;    “One of the great sources of philosophical bewilderment,” according to Wittgenstein, arises when a noun like “meaning” or “number” “makes us look for a thing that corresponds to it.” We assume that our language works principally by way of reference, so that where there is a noun there must be a thing it points to. But referring to objects is just one of language’s many functions or games. Instead of looking for the things behind our words, Wittgenstein proposes studying the grammar of the language game: the role words play—and don’t play—in these activities. &#xA;    When we reflect on words like “meaning,” “thinking,” “understanding,” and “reasoning,” Wittgenstein argues, a certain picture immediately enters our heads: an internal process existing in the brain or mind that enables or somehow gives life to outwardly meaningful expressions. But, Wittgenstein asks, “What really comes before our mind when we understand a word?” Is it a kind of picture, so that I see an image of a pen in my mind’s eye when I hear the word “pen”? Do I then compare my inner picture to my experience of the outer world in order to determine whether it would be appropriate to use the word “pen”? 
Does some correspondence between this internal process and my expression “pen” somehow constitute the meaning?&#xA;    The idea of meaning as an internal process seems unproblematic at first, even unavoidable, but, as Wittgenstein shows, it’s not clear what role such a process would actually be playing. He asks his reader, for example, to “say: ‘Yes, this pen is blunt. Oh well, it’ll do.’ First, with thought; then without thought; then just think the thought without the words.” Having conducted these absurd self-examinations, Wittgenstein asks us to reflect, “What did the thought, as it existed before its expression, consist in?” &#xA;    His point is that our intuitive idea of meaning as an inner correlate of our outward expressions breaks down when it is taken as something like a scientific theory for what’s really going on when we use language. This failure shouldn’t surprise us. Our language did not evolve for scientific or metaphysical purposes, but just to help us make do and get along in the real world.&#xA;    The picture of thought as an internal process accompanying our use of language is just that: a picture. It is unproblematic insofar as it arises in everyday language, as when I clarify a misunderstanding by telling you, after you’ve mistakenly handed me a red pen on the desk, “No, I meant that blue pen on the bookshelf.” But that sentence is not a claim about the state of my brain a moment ago; it could not be confirmed or disconfirmed by some kind of retroactive brain scan. It’s merely a way to advance a practical project that has gone off the rails. 
If it’s anywhere, meaning is in that project, not in my brain.&#xA;    Of course, we might imagine that some industrious cognitive scientist equipped with the latest in brain-imaging technology might actually try to establish a causal connection between a particular brain state and the correct usage of the word “pen.” But even in that case, would it be correct to say that with a coordinated set of brain images we’ve in some sense located the meaning of the word “pen”? In what sense would the internal state that shows up on the scan explain the use or understanding of the word? Would it be analogous to the way the properties of an internal combustion engine can help explain the forward motion of a car?&#xA;    This example shows how strange it is to use an examination of brain states instead of actual behavior as a criterion for ascribing understanding. If we’re looking for understanding and meaning, Wittgenstein thinks, we will find them in the various things we do with language and not in some internal process that accompanies our use of language.&#xA;    This is just one of the strategies Wittgenstein uses to try to dissuade his readers from a mechanical, pseudoscientific understanding of language as it is embedded in human practices. The Investigations doesn’t attempt to refute this false understanding by formal, analytic argumentation the way a scientist or science-imitating philosopher might. Wittgenstein instead tries to show its limitations. His makeshift strategies—describing language games, imagining dialogues, conducting thought experiments, and drawing analogies—show how the scientific worldview has strayed from narrowly defined areas where it actually has purchase and started to distort our understanding of domains where it doesn’t belong.&#xA;&#xA;---&#xA;&#xA;It all reminds me of Noam Chomsky&#39;s discussion with Michel Gondry in the documentary Is the Man Who Is Tall Happy?. 
The documentary is packed with discussion on matters like universal grammar, but I remember Chomsky talking about how a small child can see a tall man who&#39;s happy; the child immediately knows that man is happy, and can draw parallels that allow the child to equally immediately know that not every man who is tall is happy, nor that every man is happy. The two-year-old Watumull/Roberts/Chomsky article The False Promise of ChatGPT says much about this.&#xA;&#xA;It&#39;s not hard to know where happiness is found. To experience happiness is another thing, and AI won&#39;t help us there.&#xA;&#xA;#ArtificialIntelligence #music #ElliottSmith #NoamChomsky #LudwigWittgenstein]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://files.catbox.moe/kh0cuc.jpg" alt="Elliott cover"></p>

<p>I&#39;m reading a forthcoming biography of Elliott Smith, titled <em>Nobody Broke Your Heart: An Intimate Biography of Elliott Smith</em>.</p>

<p>So far, I&#39;ve only read the introduction. It bears the hallmarks of a great fucking book.</p>

<blockquote><p>Twenty-three years after his death, Elliott still isn’t particularly well known, or well understood, but he is terribly loved. The task of understanding and preserving his legacy has become a collective project. There are YouTube accounts like I Remember Elliott; his old fan site Sweet Adeline, defiantly mired in Web 1.0; oral-history blogs like so flawed and drunk and perfect still; Smiling at Confusion, a site for posting guitar tabs, guidance on fingerings and chords. Fans share bootleg recordings and unreleased songs, reflections on his lyrics or entreaties for help understanding them, and they speculate darkly on his death. Below videos you’ll find hundreds of comments, people gushing over Elliott’s fingerpicking, arguing about whether he’s on something at this concert or just tired but clean, thanking him for accompanying them through depression or addiction, for making them feel less alone.</p></blockquote>

<p>What is immediate, what is human, that is love.</p>

<hr>

<p>Recently, a friend asked me to run some of their works through AI to see what it would create. Instantly, it generated some seemingly worthwhile stuff, but in actuality AI is autocorrect on steroids. My friend isn&#39;t very knowledgeable about AI but they produce stuff that&#39;s, frankly, some of the best I&#39;ve ever read and heard in their &#39;fields&#39;.</p>

<p>Three recent articles about AI:</p>
<ul><li><a href="https://news.stanford.edu/stories/2026/03/ai-advice-sycophantic-models-research">AI overly affirms users asking for personal advice</a></li>
<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender</a></li>
<li><a href="https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/">The AI Industry Is Lying To You</a></li></ul>

<p><img src="https://files.catbox.moe/5t5jut.jpeg" alt="Ludwig Wittgenstein">
<em>Ludwig Wittgenstein</em></p>

<p>Wittgenstein is one of my favourite modern philosophers. I highly recommend Ray Monk&#39;s beautiful <em><a href="https://citylights.com/biography-memoir/ludwig-wittgenstein-duty-of-genius">Ludwig Wittgenstein: The Duty of Genius</a></em>.</p>

<p>A <a href="https://www.commonwealmagazine.org/wittgenstein-apocalypse-ludwig-stern-ai-artificial-intelligence-technology">recently published article</a> on large language models (LLMs) as they relate to Wittgenstein&#39;s views on language, semantics, and mathematics is very interesting indeed.</p>

<blockquote><p>When Wittgenstein referred to the “beginning of the end of humanity,” he was not envisioning sci-fi cataclysms on the order of <em>The Matrix</em> or <em>The Terminator</em> or even <em>Dr. Strangelove</em>. He was referring to the end of humanity not primarily in terms of its biological survival, but in terms of what he called the “form of life” we inhabit. That form of life is threatened not so much by industrialization, nukes, robots, or AI agents as by a way of thinking that lowers human life to the plane of science and technology. Wittgenstein’s attempt to draw attention to that way of thinking—and dissuade us from it—is of the utmost importance in an era where the developing AI ideology threatens to further distort our understanding of how we use language and how we live.</p></blockquote>

<p>A more in-depth excerpt from the wondrously and sharply written article:</p>

<blockquote><p>The parts of the Investigations where Wittgenstein probes our concepts of thinking and understanding can help us escape the conceptual muddles that plague discussions and debates over AI and so-called “artificial general intelligence.”</p>

<p>“One of the great sources of philosophical bewilderment,” according to Wittgenstein, arises when a noun like “meaning” or “number” “makes us look for a thing that corresponds to it.” We assume that our language works principally by way of reference, so that where there is a noun there must be a thing it points to. But referring to objects is just one of language’s many functions or games. Instead of looking for the things behind our words, Wittgenstein proposes studying the grammar of the language game: the role words play—and don’t play—in these activities.</p>

<p>When we reflect on words like “meaning,” “thinking,” “understanding,” and “reasoning,” Wittgenstein argues, a certain picture immediately enters our heads: an internal process existing in the brain or mind that enables or somehow gives life to outwardly meaningful expressions. But, Wittgenstein asks, “What really comes before our mind when we understand a word?” Is it a kind of picture, so that I see an image of a pen in my mind’s eye when I hear the word “pen”? Do I then compare my inner picture to my experience of the outer world in order to determine whether it would be appropriate to use the word “pen”? Does some correspondence between this internal process and my expression “pen” somehow constitute the meaning?</p>

<p>The idea of meaning as an internal process seems unproblematic at first, even unavoidable, but, as Wittgenstein shows, it’s not clear what role such a process would actually be playing. He asks his reader, for example, to “say: ‘Yes, this pen is blunt. Oh well, it’ll do.’ First, with thought; then without thought; then just think the thought without the words.” Having conducted these absurd self-examinations, Wittgenstein asks us to reflect, “What did the thought, as it existed before its expression, consist in?”</p>

<p>His point is that our intuitive idea of meaning as an inner correlate of our outward expressions breaks down when it is taken as something like a scientific theory for what’s really going on when we use language. This failure shouldn’t surprise us. Our language did not evolve for scientific or metaphysical purposes, but just to help us make do and get along in the real world.</p>

<p>The picture of thought as an internal process accompanying our use of language is just that: a picture. It is unproblematic insofar as it arises in everyday language, as when I clarify a misunderstanding by telling you, after you’ve mistakenly handed me a red pen on the desk, “No, I meant that blue pen on the bookshelf.” But that sentence is not a claim about the state of my brain a moment ago; it could not be confirmed or disconfirmed by some kind of retroactive brain scan. It’s merely a way to advance a practical project that has gone off the rails. If it’s anywhere, meaning is in that project, not in my brain.</p>

<p>Of course, we might imagine that some industrious cognitive scientist equipped with the latest in brain-imaging technology might actually try to establish a causal connection between a particular brain state and the correct usage of the word “pen.” But even in that case, would it be correct to say that with a coordinated set of brain images we’ve in some sense located the meaning of the word “pen”? In what sense would the internal state that shows up on the scan explain the use or understanding of the word? Would it be analogous to the way the properties of an internal combustion engine can help explain the forward motion of a car?</p>

<p>This example shows how strange it is to use an examination of brain states instead of actual behavior as a criterion for ascribing understanding. If we’re looking for understanding and meaning, Wittgenstein thinks, we will find them in the various things we do with language and not in some internal process that accompanies our use of language.</p>

<p>This is just one of the strategies Wittgenstein uses to try to dissuade his readers from a mechanical, pseudoscientific understanding of language as it is embedded in human practices. The Investigations doesn’t attempt to refute this false understanding by formal, analytic argumentation the way a scientist or science-imitating philosopher might. Wittgenstein instead tries to show its limitations. His makeshift strategies—describing language games, imagining dialogues, conducting thought experiments, and drawing analogies—show how the scientific worldview has strayed from narrowly defined areas where it actually has purchase and started to distort our understanding of domains where it doesn’t belong.</p></blockquote>

<hr>

<p>It all reminds me of Noam Chomsky&#39;s discussion with Michel Gondry in the documentary <em><a href="https://letterboxd.com/film/is-the-man-who-is-tall-happy">Is the Man Who Is Tall Happy?</a></em>. The documentary is packed with discussion on matters like universal grammar, but I remember Chomsky talking about how a small child can see a tall man who&#39;s happy; the child immediately knows <em>that man</em> is happy, and can draw parallels that allow the child to equally immediately know that not every man who is tall is happy, nor that every man is happy. The two-year-old Watumull/Roberts/Chomsky article <em><a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">The False Promise of ChatGPT</a></em> says much about this.</p>

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/nbYMmJrdXbY?si=wW1zX9lmmMYa1zsK" title="YouTube video player" frameborder="0" allowfullscreen=""></iframe>

<p>It&#39;s not hard to know where happiness is found. To experience happiness is another thing, and AI won&#39;t help us there.</p>

<p><a href="https://thoughts.pivic.com/tag:ArtificialIntelligence" class="hashtag"><span>#</span><span class="p-category">ArtificialIntelligence</span></a> <a href="https://thoughts.pivic.com/tag:music" class="hashtag"><span>#</span><span class="p-category">music</span></a> <a href="https://thoughts.pivic.com/tag:ElliottSmith" class="hashtag"><span>#</span><span class="p-category">ElliottSmith</span></a> <a href="https://thoughts.pivic.com/tag:NoamChomsky" class="hashtag"><span>#</span><span class="p-category">NoamChomsky</span></a> <a href="https://thoughts.pivic.com/tag:LudwigWittgenstein" class="hashtag"><span>#</span><span class="p-category">LudwigWittgenstein</span></a></p>
]]></content:encoded>
      <guid>https://thoughts.pivic.com/elliott-smith-and-ludwig-wittgenstein</guid>
      <pubDate>Tue, 31 Mar 2026 07:23:52 +0200</pubDate>
    </item>
    <item>
      <title>AI and turning off thought</title>
      <link>https://thoughts.pivic.com/ai-and-turning-off-thought</link>
      <description>&lt;![CDATA[https://files.catbox.moe/6xs4km.png&#xA;&#xA;This is a LinkedIn post by a charming person with whom I used to work. The person uses English as their first language.&#xA;&#xA;https://files.catbox.moe/bnj21f.png&#xA;&#xA;This is a Pangram analysis of the LinkedIn post: the entire post is most likely generated by AI.&#xA;&#xA;https://files.catbox.moe/vlcait.png&#xA;&#xA;This is a reaction to the post. A person whom I respect claims to love the post.&#xA;&#xA;What does the post say about the human who published the post? About the one who loved the post?&#xA;&#xA;Every human makes mistakes. However, using AI turns off thought, often and notably critical thought. A six-month-old human innately reflects and learns; AI is just autocorrect on steroids, built on top of a fraction of the data that passes through an average human during a day.&#xA;&#xA;The more humans use a sycophantic bullshit generator, the more they succumb to its allure. This is natural in bullshit.&#xA;&#xA;Amazon now mandates that AI-generated code from junior and mid-level programmers be reviewed before it crashes their own systems.&#xA;&#xA;Microsoft owns LinkedIn. The Microsoft CEO claims to only use AI chatbots instead of reading email, which should result in his getting fired.&#xA;&#xA;Alas, here we are. AI has its uses, but is rarely worth it, mainly because a single AI interaction takes a monumental toll on the climate and often produces erroneous results.&#xA;&#xA;If people asked strangers on the street about certain things, they might get wrong answers. Would those people be sycophants and liars? Maybe, but it&#39;s not likely that they would make the repetitive and idiot-like &#39;mistakes&#39; that are made by popular AI chatbots that are trained on mainly stolen data.&#xA;&#xA;Would you befriend AI?&#xA;&#xA;#ArtificialIntelligence]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://files.catbox.moe/6xs4km.png" alt="https://files.catbox.moe/6xs4km.png"></p>

<p>This is a LinkedIn post by a charming person with whom I used to work. The person uses English as their first language.</p>

<p><img src="https://files.catbox.moe/bnj21f.png" alt="https://files.catbox.moe/bnj21f.png"></p>

<p>This is a Pangram analysis of the LinkedIn post: the entire post is most likely generated by AI.</p>

<p><img src="https://files.catbox.moe/vlcait.png" alt="https://files.catbox.moe/vlcait.png"></p>

<p>This is a reaction to the post. A person whom I respect claims to love the post.</p>

<p>What does the post say about the human who published the post? About the one who loved the post?</p>

<p>Every human makes mistakes. However, using AI turns off thought, often and notably <i>critical</i> thought. A six-month-old human innately reflects and learns; AI is just autocorrect on steroids, built on top of a fraction of the data that passes through an average human during a day.</p>

<p>The more humans use a sycophantic bullshit generator, the more they succumb to its allure. This is natural in <a href="https://garden.pivic.com/concepts/bullshit">bullshit</a>.</p>

<p>Amazon now <a href="https://x.com/lukolejnik/status/2031257644724342957/?rw_tt_thread=True">mandates that AI-generated code from junior and mid-level programmers be reviewed before it crashes their own systems</a>.</p>

<p>Microsoft owns LinkedIn. The <a href="https://www.wheresyoured.at/the-era-of-the-business-idiot/">Microsoft CEO claims to only use AI chatbots instead of reading email</a>, which should result in his getting fired.</p>

<p>Alas, here we are. AI has its uses, but is rarely worth it, mainly because a single AI interaction takes a monumental toll on the climate and often produces erroneous results.</p>

<p>If people asked strangers on the street about certain things, they might get wrong answers. Would those people be sycophants and liars? Maybe, but it&#39;s not likely that they would make the repetitive and idiot-like &#39;mistakes&#39; that are made by popular AI chatbots that are trained on mainly stolen data.</p>

<p>Would you befriend AI?</p>

<p><a href="https://thoughts.pivic.com/tag:ArtificialIntelligence" class="hashtag"><span>#</span><span class="p-category">ArtificialIntelligence</span></a></p>
]]></content:encoded>
      <guid>https://thoughts.pivic.com/ai-and-turning-off-thought</guid>
      <pubDate>Fri, 20 Mar 2026 06:18:21 +0100</pubDate>
    </item>
    <item>
      <title>Sloperator</title>
      <link>https://thoughts.pivic.com/sloperator</link>
      <description>&lt;![CDATA[Sloperator&#xA;&#xA;Merriam-Webster&#39;s word of the year in 2025 is &#39;slop&#39;. One definition:&#xA;&#xA;  digital content of low quality that is produced usually in quantity by means of artificial intelligence&#xA;&#xA;I think it&#39;s safe to say nobody sees AI-generated content as good, at least not at a high level. To constantly need to nanny what&#39;s basically autocorrect on steroids is a horrific user experience. Also, because AI is built on stolen copy, doesn&#39;t produce profits, and rapidly accelerates the climate catastrophe...what&#39;s there to like? Badly composed images? Computer code that poses enormous security risks and isn&#39;t really built for maintenance?&#xA;&#xA;AI doesn&#39;t even produce consistent results. Consistency is something we would require of AI for professional use, yet that&#39;s impossible to achieve.&#xA;&#xA;Read this article on what AI can and can&#39;t do.&#xA;&#xA;Don&#39;t be a sloperator.&#xA;&#xA;If anything, AI should be used to replace the bullshitting managers who claim to love AI. Oh, bullshit:&#xA;&#xA;I keep a garden page about bullshit.&#xA;&#xA;#ArtificialIntelligence]]&gt;</description>
      <content:encoded><![CDATA[<p><strong>Sloperator</strong></p>

<p><img src="https://files.catbox.moe/k4c664.webp" alt=""></p>

<p>Merriam-Webster&#39;s word of the year in 2025 is &#39;<a href="https://www.merriam-webster.com/dictionary/slop">slop</a>&#39;. One definition:</p>

<blockquote><p>digital content of low quality that is produced usually in quantity by means of artificial intelligence</p></blockquote>

<p>I think it&#39;s safe to say nobody sees AI-generated content as good, at least not at a high level. To constantly need to nanny what&#39;s basically autocorrect on steroids is a horrific user experience. Also, because AI is built on stolen copy, doesn&#39;t produce profits, and rapidly accelerates the climate catastrophe...what&#39;s there to like? Badly composed images? Computer code that poses enormous security risks and isn&#39;t really built for maintenance?</p>

<p>AI doesn&#39;t even produce consistent results. Consistency is something we would require of AI for professional use, yet that&#39;s impossible to achieve.</p>

<p>Read <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">this article</a> on what AI can and can&#39;t do.</p>

<p>Don&#39;t be a sloperator.</p>

<p>If anything, AI should be used to replace the bullshitting managers who claim to love AI. Oh, <a href="https://en.wikipedia.org/wiki/On_Bullshit">bullshit</a>:</p>

<p>I keep <a href="https://garden.pivic.com/concepts/bullshit/">a garden page about bullshit</a>.</p>

<p><a href="https://thoughts.pivic.com/tag:ArtificialIntelligence" class="hashtag"><span>#</span><span class="p-category">ArtificialIntelligence</span></a></p>
]]></content:encoded>
      <guid>https://thoughts.pivic.com/sloperator</guid>
      <pubDate>Wed, 17 Dec 2025 11:46:45 +0100</pubDate>
    </item>
  </channel>
</rss>