-----
id: ai
draft: true
title: "A Tale of Hype and Boredom"
subtitle: "Reflection on AI, sparkles, and washing machines"
content-type: article
timestamp: 1737926162
-----

I don't write very often these days. But if and when I do, it should be about something meaningful, so that in [ten or twenty years](/articles/twenty-years/)' time I will be able to say "Oh wow, that thing really felt important back then, and it isn't now" or "I didn't really understand the magnitude of that at the time".

That's why I have decided that I should really write about Artificial Intelligence. Not the _real thing_ (AGI) that we will probably have twenty years from now, just the dumb-but-sometimes-useful surrogate that is _generative_ AI, which we have today, in 2025.

## Not your average positronic brain

Back in 1987 (nearly FORTY years ago, GOD I am old!), when Star Trek: The Next Generation premiered, it introduced the character of Lt. Cmdr. Data, a sentient android portrayed by the legendary Brent Spiner. That character was probably originally conceived as a sort of replacement for Mr. Spock from TOS, being primarily driven by logic, but with a _twist_: even though he couldn't understand basic jokes, or even speak using word contractions, Data longed to become human, above all else. 
And his positronic brain, in all its sophistication, was not able to feel emotions (he eventually got an upgrade, but that's another story). Not only that, he struggled to even _simulate_ them, and was incapable of lying.

Today's LLMs are nothing like Data. Our poor man's AI is a powerful set of algorithms that can not only use contractions and effectively mimic the writing style of humans, but is also _frequently_ wrong, hallucinates, and has a natural talent for deceiving humans. It can also effectively simulate creativity, and even paint you a brand new picture in less than a second!

<figure>
	<img src="/images/ai/ai-world-domination.webp" alt="An AI-generated image of AI taking over the world" />
	<figcaption>Create an image representing Artificial Intelligence conquering the world and the internet. Overemphasize the technology side of it and draw your inspiration from popular science fiction.</figcaption>
</figure>

What about quality, you ask? Ehhh, we are working on it. What about reliability, can it actually... Pffft, never mind. 

It turns out that our so-called (generative) AI is a lot more approximate, imperfect, and... maybe even _human_ than Data ever was. Is that a good thing? Meh, I guess probably not: personally, I would rather have something reliable that tells me "Sorry, cannot compute" than this kissass know-it-all that keeps saying "Certainly, I can do that." Not only does it tell you it can do almost whatever you ask, it also corroborates it with enough plausible bullshit that, if you are _genuinely_ relying on it because you _actually_ didn't know anything about what you asked, it may take you hours or even days to realize that that overly-polite artificial nimrod actually knew _fuck all_ about your question, and just _made it up_ within seconds, tapping into an ever-growing amount of data which may or may not be correct in the first place.

DON'T

TRUST

(generative) AI

No really, just don't. Tell your children, your grandmothers, all your loved ones that Chat-thingie or whatever crap it's called should not be trusted more than a used car salesman, _at best_ (it's a figure of speech; I have actually met a few pretty decent used-car salesmen).

## It is actually pretty good (at certain things)

Assuming you are not dumb enough to trust them with your life, your job, or anything important, LLMs are undeniably one of the greatest inventions of the decade. Even though this technology has forever changed our collective imagination when it comes to AI, and killed 90% of science fiction literature with it, these things are pretty damn impressive, at certain things.

Things like:
- Understanding written text
- Translating text into other languages
- Correcting grammar and spelling
- Writing boilerplate code
- Lightweight research and quick summaries

These are some of the things generative AI is good at. Often really good, in fact, and if you don't overdo it, it may even make you genuinely more productive.

An [interesting article](https://seldo.com/posts/what-ive-learned-about-writing-ai-apps-so-far) I came across recently summarizes it quite well:

> "LLMs are good at transforming text into less text"

The above quote should be turned into mobile and desktop wallpapers, printed on billboards, and tattooed on the foreheads of execs.

Perhaps it is a bit of an oversimplification, but then again, it doesn't say that they are _only_ good at that, right? I think it's a good and safe rule of thumb for when you are in doubt.
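
If you want to see what "text into less text" looks like in practice, here is a minimal sketch in Python, using the OpenAI SDK purely as an example: the model name, prompt, and input file are my own assumptions, and any chat-capable model, local or remote, would do just as well.

```python
# A minimal sketch of "transforming text into less text":
# ask a model to condense an article into a short summary.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()


def summarize(text: str, max_words: int = 100) -> str:
    """Condense `text` into at most `max_words` words."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": f"Summarize the user's text in at most {max_words} words.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("article.md") as f:  # hypothetical input file
        print(summarize(f.read()))
```

And of course, in the spirit of everything said above: read the summary against the original before you trust it.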

## Everybody wants me, everybody seeks me

Everyone wants AI. It's like a must-have, or you are not cool enough. A few months ago we replaced our washing machine, and we got the best one we have ever had. Honest. Forgive the sexism, but you could call it _husband-proof_: you turn it on, you turn the knob to your desired program, and press play. Simple enough:

<figure>
<img src="/images/ai/washing-machine-ai.gif" alt="An AI-enabled washing machine" />
<figcaption>
Turning on my washing machine
</figcaption>
</figure>

So you turn it on, it says "Hello", followed by _Optimizing with AI_, and then...

Notes:
- washing machine 
- comparison with commander Data
- apple and ms didn't get it right 
- junior engineer wants ai
- ai for doc results example
- Sparkles emoji
- artisan engineers




[](https://www.tomsguide.com/ai/apple-intelligence/apple-faces-criticism-after-shockingly-bad-apple-intelligence-headline-errors)

[](https://www.zdnet.com/home-and-office/work-life/the-microsoft-365-copilot-launch-was-a-total-disaster/)

[](https://arstechnica.com/ai/2025/01/openai-launches-operator-an-ai-agent-that-can-operate-your-computer/) 

[](https://www.dair-institute.org/blog/letter-statement-March2023/)

[](https://www.techpolicy.press/challenging-the-myths-of-generative-ai/)

[](https://notbyai.fyi)

[](https://github.com/ai-robots-txt/ai.robots.txt)

[](https://maggieappleton.com/ai-dark-forest)

[](https://smolweb.org)





## Fuzziness

## A reliability problem 

## Not a product