-----
id: gen-ai
draft: true
title: "A Tale of Hype and Fallacy"
subtitle: "Reflections on Generative AI, sparkles, and... washing machines"
content-type: article
timestamp: 1737926162
-----

I don't write very often these days. But if and when I do, it should be about something meaningful, so that in [ten or twenty years'](/articles/twenty-years/) time I will be able to say "Oh wow, that thing really felt important back then, and it isn't now" or "I didn't really understand the magnitude of that at the time".

That's why I have decided that I should really write about Artificial Intelligence. Not the _real thing_ (AGI) that we will probably have twenty years from now, but the dumb-but-sometimes-useful surrogate we have today, in 2025: _generative_ AI.

### Not your average positronic brain

Back in 1987 (nearly FORTY years ago, GOD I am old!), when Star Trek: The Next Generation premiered, it introduced the character of Lt. Cmdr. Data, a sentient android portrayed by the legendary Brent Spiner. That character was probably conceived originally as a sort of replacement for Mr. Spock from TOS, being primarily driven by logic, but with a _twist_: even though he couldn't understand basic jokes, or even speak using word contractions, Data longed to become human, above all else.
And his positronic brain, in all its sophistication, was not able to feel emotions (he eventually got an upgrade, but that's another story). Not only that: he struggled to even _simulate_ them, and was incapable of lying.

Today's LLMs are nothing like Data. Our poor man's AI is a powerful set of algorithms that not only uses contractions and effectively mimics the writing style of humans, but is also _frequently_ wrong, hallucinates, and has a natural talent for deceiving humans. It can also effectively simulate creativity, and even paint you a brand new picture in less than a second!

<figure>
	<img src="/images/ai/ai-world-domination.webp" alt="An AI-generated image of AI taking over the world" />
	<figcaption>Create an image representing Artificial Intelligence conquering the world and the internet. Overemphasize the technology side of it and draw your inspiration from popular science fiction.</figcaption>
</figure>

What about quality, you ask? Ehhh, we are working on it. What about reliability, can it actually... Pffft, never mind.

It turns out that our so-called (generative) AI is a lot more approximate, imperfect, and... maybe even _human_ than Data ever was. Is that a good thing? Meh, I guess probably not: personally, I would rather have something reliable that tells me "Sorry, cannot compute" than this kissass know-it-all that keeps saying "Certainly, I can do that." Not only does it tell you it can do almost whatever you ask, it also corroborates it with a sufficient amount of plausible bullshit that, if you are _genuinely_ relying on it because you _actually_ didn't know anything about what you asked, it may take you hours or even days to realize that that overly-polite artificial nimrod actually knew _fuck all_ about what you asked, and just _made it up_ within seconds, tapping into an ever-growing amount of data which may or may not be correct in the first place.

DON'T

TRUST

(generative) AI

No really, just don't. Tell your children, your grandmothers, all your loved ones that Chat-thingie or whatever crap it's called should not be trusted more than a used car salesman, _at best_ (it's a figure of speech; I have actually met a few pretty decent used-car salesmen).

### It is actually pretty good (at certain things)

Assuming you are not dumb enough to trust them with your life, your job, or anything important, LLMs are undeniably one of the greatest inventions of the decade. Even though generative AI has forever changed our collective imagination when it comes to AI, and killed 90% of science fiction literature with it, these things are pretty damn impressive at certain tasks.

Things like:
- Understanding written text
- Translating text into other languages
- Correcting grammar, spelling
- Writing boilerplate code
- Lightweight research and quick summaries

These are some of the things generative AI is good at. Often _really_ good at, and if you don't overdo it, it may even make you genuinely more productive.

An [interesting article](https://seldo.com/posts/what-ive-learned-about-writing-ai-apps-so-far) I came across recently summarizes it quite well:

> "LLMs are good at transforming text into less text"

The above quote should be turned into mobile and desktop wallpapers, printed on billboards, and tattooed on the forehead of execs.

Perhaps it is a bit of an oversimplification, but then again, it doesn't say that they are "only" good at that, right? I think it's a good and safe rule of thumb for when you are in doubt.

### Tutti mi vogliono, tutti mi cercano

Everyone wants AI. It's like a must-have, or you are not cool enough. Months ago we changed our washing machine, and we got the best one we have ever had. Honest. Forgive the sexism, but you could call it _husband-proof_: you turn it on, you turn the knob to your desired program, and press play. Simple enough:

<figure>
<img src="/images/ai/washing-machine-ai.gif" alt="An AI-enabled washing machine" />
<figcaption>
Turning on my washing machine
</figcaption>
</figure>

So you turn it on, it says "Hello", followed by _Optimizing with AI_, and then... it comes up with a program that, based on God knows what, is meant to be what I *should* be running at the time (side note: this washing machine can obviously be connected to the internet... and if it rains at your location, it can kindly remind you that you may not be able to dry your clothes). Anyhow, every single time, the suggested program is not what I want. Every time, I just turn the knob to the right once and select "Cloudy Day". The funny thing is that underneath it says _most-frequently used cycle_. So it keeps track of how often I use a program; then WHY, pray tell, do you have to make me wait a second and a bit for a pointless message about AI, followed by a program that (statistically) I am not going to run?

Marketing, probably. There seems to be an urge to advertise the fact that something is _powered by AI_ and that something else is _enhanced with AI_. Because (for now) it sells, and marketers, executives, and the like are trying to capitalize on this for extra profit. Sometimes even when there's no AI involved at all, or when AI's contribution is nothing but marginal.

Junior engineers are the worst. A while back, I was asked by a manager to suggest some ideas for a local mini-hackathon. I always have a list for this sort of thing, and I gave him a few ideas.

_Thanks but... I was thinking, what about anything related to AI? Because you know, everyone wants to do something with AI these days_

What the actual f***. I mean, _seriously_. I really struggled to resist the urge to quote [one of the most insightful rants](https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/) on AI I have ever read.

The one thing that is more annoying than AI is people desperately trying to shove it down your throat.

### Generative AI is not a product

As a technical product manager myself, I frequently have to deal with AI-generated hype at all levels: executives want you to put AI in your products, and so do engineers, it seems. Every time anybody comes to me with a proposal to use AI to do something, I ask myself the following questions:

* Does AI provide any real _value_ to the user?
* Is AI actually making things _faster_ or _easier_ for the user?
* Can I replace the AI integration with something that is _faster_ or _easier_ for the user?
* Is AI solving just one specific aspect of the problem or can it be used to solve the problem _holistically_?

Unless the answer to at least one of those questions is a resounding _yes_, you have a problem.

You really cannot sell generative AI by itself: you need a problem to solve, and you need to solve it in a way that actually makes things easier or faster for the user. AI _can_ be a productivity boost, but it cannot replace the product itself.

Let's go through a couple of examples. 

As part of a mini hackathon, one of the engineers desperately wanted to showcase how they managed to integrate AI into documentation search. I was expecting the usual _here's a summary of the results in a blurb_ kind of thing, but it wasn't even that. Here goes:

Engineer: _"So, I am integrating AI into our search command. See, from the command line I am doing a search on our documentation, I fetch the first three results, and then I send them to AI and..."_

Me: _"...And with AI you are determining the most relevant of those results, i.e. what the user actually needs?"_

Engineer: _"No, see, AI then gives me back the same results, in the same order, but see... it shows a better title, and it provides three lines of mini-summary..."_

Me: _"So you are saying that AI is actually *changing the page title* and providing a sort of preview of the content. But... would the user really want that, or would the user just be happy with seeing the *same* page title and the search highlighting provided natively by the existing full-text search?"_

Next...

Another example was a pseudo Copilot thing that was able to process a prompt asking to add an icon indicator to a specific object type. As always, I was given the usual demo, and the results were pretty good.

Me: _"OK, can I then use this copilot thingie to implement other customizations? Like for example [...]"_

Engineer: _[Starts talking on how they trained this with x amount of specific data to produce accurate results for that specific use case...]_

Me: _"How is this different from having a command line or a UI wizard populating a pre-done template then?"_

Engineer: _"Well, see, the user can just ask the Copilot to..."_

No. Asking our users to type in a prompt instead of pushing buttons in a UI to do the same thing is not _cool_. It's a _colossal waste of time_. If someone comes to you and says that AI can make your users more productive so that they don't need to know how to use your user interface anymore... you have a much bigger problem: perhaps, your user interface _just sucks_ and you simply have to fix that.

But no. Some people just want the sparkles ✨. Add a ✨ to your interface, show a Copilot-like panel, and your product will sell. No. That may have been true months ago, but users are getting smarter. Some users (like me) have grown so used to different AIs hallucinating so badly that they systematically avoid pressing ✨ buttons.

### De rerum _novissimarum_

On May 8th, the (Catholic) world rejoiced at the announcement by protodeacon Dominique Mamberti that we had a new Pope:

> Annuntio vobis gaudium magnum:
> HABEMUS PAPAM
> Eminentissimum ac reverendissimum Dominum Robertum Franciscum
> Sanctae Romanae Ecclesiae Cardinalem Prevost
> qui sibi nomen imposuit **Leo XIV**

The fact that the College of Cardinals elected the first US-born Pope in history was surely big news. But the even _bigger_ news was the name Cardinal Prevost chose for himself as a Pope: _Leo XIV_. Catholics are well aware that the name of a Pope is loaded with deep meaning, and it often gives the world a clear indication of the Pope's views on certain things. If that wasn't clear enough, Pope Leo XIV spelled it out to the _College of Cardinals_ just two days after his election:

> There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution. 
> In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to **developments in the field of artificial intelligence** that pose new challenges for the defence of human dignity, justice and labour.

-- [Pope Leo XIV names AI one of the reasons for his papal name, The Verge](https://www.theverge.com/news/664719/pope-leo-xiv-artificial-intelligence-concerns)

That is big news. It's not that the Catholic Church is going to excommunicate people for using ChatGPT because it's evil or something, but it is probably going to spend a significant amount of time helping people deal with AI. I find the comparison to the industrial revolution incredibly fitting. I am willing to bet that our new Pope will be writing an encyclical about the role of generative AI in our society, like his namesake did at the time of the industrial revolution.

While we wait, there's actually a very interesting (and very, VERY long) essay written by... the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education that already provides a rather in-depth analysis of artificial intelligence and its present and future impact on society: [Note on the Relationship Between Artificial Intelligence and Human Intelligence](https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html).

Overall, this is an excellent read on the subject, whether you are Catholic or not. It does a good job of acknowledging the power that AI can have over society and in improving our lives, but it does an even better job of pinpointing the inherent problems associated with it.

The key point is that AI _"should be used to contribute to human development and the common good"_, and that it should _"assist, not replace, human judgment"_.

It then goes on to discuss the choice of the word _intelligence_ for this technology, quoting Saint Thomas Aquinas, who defined intelligence as the union of reason and intellect, where _"the term intellect is inferred from the inward grasp of the truth, while the name reason is taken from the inquisitive and discursive process"_.

It then quickly introduces the concept of _human_ intelligence, noting that _"a proper understanding of human intelligence, therefore, cannot be reduced to the mere acquisition of facts or the ability to perform specific tasks"_. Finally, it points out the inherent limitation of current generative AI: _"even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh."_


--> continue from here


Body and soul - AI is missing "body" to experience sensory inputs firsthand 


"AI cannot currently replicate moral discernment or the ability to establish authentic relationships"

"the very use of the word ‘intelligence’” in connection with AI “can prove misleading” and risks overlooking what is most precious in the human person" Pope Francis

Accountability: difficult to understand why wrong 

"those using AI should be careful not to become overly dependent on it for their decision-making, a trend that increases contemporary society’s already high reliance on technology."

"Generative AI can produce text, speech, images, and other advanced outputs that are usually associated with human beings. Yet, it must be understood for what it is: a tool, not a person."

"The need to keep up with the pace of technology can erode workers’ sense of agency and stifle the innovative abilities they are expected to bring to their work."

Overall, Church warns against human/machine interaction 

Education:
- AI also presents a serious risk of generating manipulated content and false information, which can easily mislead people due to its resemblance to the truth.
- We cannot allow algorithms to limit or condition respect for human dignity, or to exclude compassion, mercy, forgiveness, and above all, the hope that people are able to change

Impact on environment:
AI can support sustainable agriculture, optimize energy usage, and provide early warning systems for public health emergencies. These advancements have the potential to strengthen resilience against climate-related challenges and promote more sustainable development.
Considering the heavy toll these technologies take on the environment, it is vital to develop sustainable solutions that reduce their impact on our common home

Warfare:
Like any tool, AI is an extension of human power, and while its future capabilities are unpredictable, humanity’s past actions provide clear warnings. The atrocities committed throughout history are enough to raise deep concerns about the potential abuses of AI.

Conclusions:
with an increase in human power comes a broadening of responsibility on the part of individuals and communities.
[Paul VI in 1965 - Pastoral Constitution on the Church in the Modern World](https://www.vatican.va/archive/hist_councils/ii_vatican_council/documents/vat-ii_const_19651207_gaudium-et-spes_en.html)

AI should be used only as a tool to complement human intelligence rather than replace its richness

One must go beyond the mere accumulation of data and strive to achieve true wisdom.

In a world marked by AI, we need the grace of the Holy Spirit, who “enables us to look at things with God’s eyes, to see connections, situations, events and to uncover their real meaning.”

---

### Bulls*it machines

### Vibe coding

## Fuzziness

## A reliability problem 

## Agents

[](https://cendyne.dev/posts/2025-03-19-vibe-coding-vs-reality.html)


-------

Notes:

- apple and ms didn't get it right 

- Sparkles emoji
- artisan engineers
- my law: The effectiveness of AI is directly proportional to the intelligence of its user


[](https://thebullshitmachines.com)
Informative, lessons
Predictive text, autocomplete
Can't reason
Don't know truth
Guesswork

"Brandolini's Law: The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it."

Mimicking language of specific domains
Talk to computers in a natural way
Translate natural language to...
Cannot be debugged, unpredictable
Anthropoglossic machine
Unable to explain why they did/say something 
Deepfakes, scams, voice+video
Verify AI response, ask for pointers, sources, low-stake decisions
Bad at info retrieval, good as starting point, no sourcing 
Plagiarism unless used for proofreading 
Mediocre prose, no opportunity to think
Kills the art of reading 

"LLMs can help skilled coders work faster and more effectively. But it's critical that the people using them (1) begin with good foundation in programming, (2) understand how an LLM can help them, and (3) have practice programming with LLM assistance."

Tempting shortcuts, kill creativity
A way to get rid of bullshit work
FOMO
AI training on AI data
Generate artifacts (or simulacra)



[](https://www.tomsguide.com/ai/apple-intelligence/apple-faces-criticism-after-shockingly-bad-apple-intelligence-headline-errors)

[](https://www.zdnet.com/home-and-office/work-life/the-microsoft-365-copilot-launch-was-a-total-disaster/)

[](https://arstechnica.com/ai/2025/01/openai-launches-operator-an-ai-agent-that-can-operate-your-computer/) 

[](https://www.dair-institute.org/blog/letter-statement-March2023/)

[](https://www.techpolicy.press/challenging-the-myths-of-generative-ai/)

[](https://notbyai.fyi)

[](https://github.com/ai-robots-txt/ai.robots.txt)

[](https://maggieappleton.com/ai-dark-forest)

[](https://smolweb.org)

[](https://stratechery.com/2025/deepseek-faq/)

[](https://newsletter.languagemodels.co/p/the-illustrated-deepseek-r1)

[](https://www.bbc.com/news/articles/c5yv5976z9po)

[](https://nmn.gl/blog/ai-illiterate-programmers)

[](https://www.platformer.news/openai-chatgpt-mental-health-well-being/)