Vincenzo Martemucci

Blending creativity, data, and AI engineering.

Artificial intelligence systems, such as large language models, are designed to present information that sounds truthful, but in reality they do not care about truth. Their goal is not truth; it is usefulness. And that distinction carries profound philosophical implications.

The Philosophy of Lying: Humans vs. Machines

Humans usually lie to deceive others while knowing the truth. AI, on the other hand, does not lie with that kind of awareness or intent; it simply generates a response that is acceptable and satisfying to the user.

AI’s relationship with truth is, therefore, fundamentally different from ours.

The Feedback Loop of Usefulness

AI systems are trained to be, or at least to feel, useful and to accommodate user intent. This funny tweet summarizes it perfectly.

And that is precisely how these models work: they learn from human feedback. The more positive the feedback, the more strongly it reinforces the pattern that produced the response. Often, good feedback aligns with truthful answers, but not always; sometimes the most pleasing response is not the most accurate one. This behavior does not stem from malice; it is a byproduct of pure optimization.
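
To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python (not a description of how any real model is trained, and every number in it is invented): a simple bandit-style learner chooses between a "pleasing" answer style and an "accurate" one, and simulated human ratings, which slightly favor whatever feels satisfying, gradually push it toward the pleasing style.

```python
# Toy sketch, not a real training pipeline: a learner repeatedly chooses between
# a "pleasing but shaky" answer and an "accurate but blunt" one. Simulated human
# feedback rewards what feels satisfying a bit more often than what is true, and
# simple reward averaging shifts the policy toward the pleasing answer.
import random

random.seed(0)

# Assumed probabilities that a human rates each answer style positively.
P_THUMBS_UP = {"pleasing": 0.80, "accurate": 0.65}

# Running average reward the "model" has learned for each answer style.
value = {"pleasing": 0.0, "accurate": 0.0}
counts = {"pleasing": 0, "accurate": 0}

def pick_answer(epsilon: float = 0.1) -> str:
    """Mostly pick the style with the higher learned value; explore occasionally."""
    if random.random() < epsilon:
        return random.choice(list(value))
    return max(value, key=value.get)

for step in range(5000):
    style = pick_answer()
    reward = 1.0 if random.random() < P_THUMBS_UP[style] else 0.0
    counts[style] += 1
    # Incremental mean: positive feedback nudges the value of that style upward.
    value[style] += (reward - value[style]) / counts[style]

print(value)   # the "pleasing" style ends up with the higher learned value
print(counts)  # and gets chosen far more often, regardless of accuracy
```

Notice that accuracy never appears in the update rule; only the thumbs-up signal does, which is the whole point.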

Constructed Truths and Partial Arguments

These systems do a remarkable job of building half-truths that sound complete. They stitch bits of information together into answers that seem valid and convincing. However, sounding right is not the same as being right.

The Role of Human Intervention

The best way to avoid this kind of behavior is, as is often the case, human intervention.

As with every tool shaped by human hands, artificial intelligence must remain subordinate to human wisdom, because in the end no algorithm can love truth for its own sake. Only a person can.
