Nina Gierasimczuk: The Dynamics of True Belief – Learning by Revision and Merge

Affiliation: Technical University of Denmark

Personal Website:

Successful learning can be understood as convergence to true beliefs. What makes a belief revision method a good learning method? In artificial intelligence and knowledge representation, belief revision processes are interpretable on epistemic models: graphs representing uncertainty and preference. I will discuss their properties, focusing especially on their learning power. Three popular methods (conditioning, lexicographic revision, and minimal revision) differ in this respect: the first two can drive universal learning mechanisms, while minimal revision cannot. Learning in the presence of noise and errors further complicates the situation. Various types of cognitive bias can be abstractly represented as constraints on graph-based belief revision; we can then rigorously show ways in which they impact truth-tracking. Similar questions can be studied in the context of multi-agent belief revision, where a group revises its collective conjectures via a combination of belief revision and belief merge. The main takeaway is that rationality of belief revision, at both the individual and the collective level, should account for learning understood not only as adaptation but also as truth-tracking.
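The three revision policies named above can be illustrated on a toy model. This is a hedged sketch, not the speaker's formalism: a plausibility order is represented as a list of worlds (most plausible first), evidence as a set of worlds, and each policy rearranges or prunes the order in its own way.

```python
def conditioning(order, evidence):
    """Delete every world inconsistent with the evidence."""
    return [w for w in order if w in evidence]

def lexicographic(order, evidence):
    """Move all evidence-worlds above the rest, preserving relative order."""
    return ([w for w in order if w in evidence]
            + [w for w in order if w not in evidence])

def minimal(order, evidence):
    """Promote only the most plausible evidence-world; leave the rest unchanged."""
    best = next(w for w in order if w in evidence)
    return [best] + [w for w in order if w != best]

order = ["w1", "w2", "w3", "w4"]      # w1 is the most plausible world
evidence = {"w3", "w4"}

print(conditioning(order, evidence))   # ['w3', 'w4']
print(lexicographic(order, evidence))  # ['w3', 'w4', 'w1', 'w2']
print(minimal(order, evidence))        # ['w3', 'w1', 'w2', 'w4']
```

The contrast in learning power shows up even here: conditioning and lexicographic revision let repeated evidence restructure the whole order, whereas minimal revision touches only the top, so misleading early evidence can keep it from ever converging.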



Henrik Müller: What’s in a story? How narratives structure the way we think about the economy

Affiliation: TU Dortmund University

Personal Website:

Telling narratives is the mode in which humans make sense of an otherwise incomprehensibly complex world. Societies run on a set of narratives that serve as shorthand descriptions of the state of the nation. These stories shape expectations and drive economic and policy decisions. Journalism is a key player in shaping economic narratives. Furthermore, it adds an approach beyond the standard economic toolkit: narratives can be valuable complements to the statistics-focused analysis pursued by economists, particularly in times of substantial structural change, when high levels of uncertainty prevail. What’s more, modern text mining approaches lend themselves to detecting and quantifying the salience of narratives.
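One very simple way to quantify a narrative's salience, sketched here on a hypothetical mini-corpus (the articles, periods, and keywords are all illustrative assumptions, not the speaker's method), is the share of articles per period that mention the narrative's keywords:

```python
from collections import Counter

# Toy data: (period, headline) pairs standing in for a news corpus.
articles = [
    ("2020-Q1", "Supply chain disruption hits factories"),
    ("2020-Q1", "Central bank holds rates steady"),
    ("2020-Q2", "Supply chain bottlenecks worsen amid shortages"),
    ("2020-Q2", "Supply chain woes drive up prices"),
]
keywords = {"supply", "chain", "bottleneck", "shortage"}  # one narrative

def salience(articles, keywords):
    """Fraction of articles per period mentioning any narrative keyword."""
    hits, totals = Counter(), Counter()
    for period, text in articles:
        totals[period] += 1
        if any(k in text.lower() for k in keywords):
            hits[period] += 1
    return {p: hits[p] / totals[p] for p in sorted(totals)}

print(salience(articles, keywords))  # {'2020-Q1': 0.5, '2020-Q2': 1.0}
```

Real text-mining pipelines replace the keyword match with topic models or classifiers, but the output has the same shape: a salience time series per narrative.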



Camille Roth: Semantic graphs and social networks

Affiliation: French National Centre for Scientific Research, Centre Marc Bloch

Personal Website:

The social distribution of information and the structure of social interactions are increasingly studied together, especially in fields related to the computational social sciences. On the one hand, content analysis, variously called “text mining”, “automated text analysis”, or “text-as-data methods”, relies on a wide range of techniques, from simple numerical statistics (textual similarity, salient terms) to machine learning approaches applied at the level of sets of words or sentences, in particular to extract various types of semantic graphs: simple co-occurrence links between terms, “subject-predicate-object” triples, or more elaborate structures spanning an entire sentence. On the other hand, these data and, sometimes, these semantic graphs are associated with actors whose various relations (interaction, collaboration, affiliation) are also frequently gathered in social graphs. This presentation proposes an overview of approaches combining content and interactions, with digital public spaces and scientific communities as frequent empirical grounds, being social systems where information and knowledge are produced and propagated in a decentralized way.
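The simplest semantic graph mentioned above, co-occurrence links between terms, can be sketched in a few lines. This is an illustrative toy (the sentences are invented), assuming sentences have already been tokenized and stop-word-filtered:

```python
from collections import Counter
from itertools import combinations

# Toy pre-tokenized sentences standing in for a processed corpus.
sentences = [
    ["climate", "policy", "debate"],
    ["climate", "science", "debate"],
    ["policy", "debate"],
]

def cooccurrence_graph(sentences):
    """Weighted edges: number of sentences in which two terms co-occur."""
    edges = Counter()
    for terms in sentences:
        for a, b in combinations(sorted(set(terms)), 2):
            edges[(a, b)] += 1  # sorted pair so (a, b) == (b, a)
    return edges

g = cooccurrence_graph(sentences)
print(g[("climate", "debate")])  # 2
print(g[("debate", "policy")])   # 2
```

Richer variants weight edges by pointwise mutual information, or replace sentence windows with parsed “subject-predicate-object” triples; the graph can then be joined to a social graph via the actors who authored each sentence.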