Are There Stable Ethical Postures Inside LLMs? Role-play Prompting and Stochastic Ethics
Published 2026-02-10
Keywords
- Stochastic ethics
Abstract
This paper explores whether large language models (LLMs) can sustain stable moral postures or whether their apparent
ethical coherence is merely stochastic. Through a series of zero-shot role-playing prompts combining specific topics and
simulated personas, this study analyzes linguistic patterns using topic modeling, TF-IDF differentiation, log-likelihood
(G²), and measures of semantic convergence and entropy. The results show no enduring moral orientation: the models do not
“choose,” but statistically recombine fragments of moral discourse inherited from their training corpora. What emerges is
a stochastic ethics—an ethics without intentionality, coherence, or agency, yet capable of reflecting human moral structures
in probabilistic form. Interpreted within the philosophical framework of moral freedom and the infosphere, the study
argues that LLMs act not as moral subjects but as amplifiers of human ethical language, redistributing the moral imaginary
of the societies that produce and employ them.
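One of the measures the abstract names, the log-likelihood ratio G², is commonly used (following Dunning) to test whether a word is over- or under-represented in one corpus relative to another. As a minimal sketch of that idea, the function below (`g2_loglik`, a hypothetical name, not the paper's own code) compares a word's observed counts in two persona-conditioned response corpora against the counts expected if the word were used at the same rate in both:

```python
from math import log

def g2_loglik(a: int, n1: int, b: int, n2: int) -> float:
    """Dunning's log-likelihood G^2 for a word occurring `a` times in a
    corpus of n1 tokens vs. `b` times in a corpus of n2 tokens."""
    # Expected counts under the null hypothesis of a shared usage rate
    e1 = n1 * (a + b) / (n1 + n2)
    e2 = n2 * (a + b) / (n1 + n2)
    g2 = 0.0
    if a > 0:
        g2 += a * log(a / e1)  # zero-count cells contribute nothing
    if b > 0:
        g2 += b * log(b / e2)
    return 2 * g2

# Illustrative counts only: suppose "duty" appears 10 times in 100 tokens
# of one persona's answers but 10 times in 1000 tokens of another's.
score = g2_loglik(10, 100, 10, 1000)
```

A large G² (e.g. above the chi-squared critical value of 3.84 at p < .05 with one degree of freedom) flags the word as distinctive of one persona's moral vocabulary; identical usage rates yield G² = 0.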