Time: Wednesday 21.4.2004, 14:30
Place: Room A4-106, Fr. Bajersvej 7
Probabilistic reasoning has been an active field of research in
Artificial Intelligence for about 25 years. For much of this time
the focus was on what might be termed propositional probabilistic
models: probability distributions on propositional interpretations.
More recently, numerous efforts have been made to develop semantic
foundations and representation languages for richer types of models
that also incorporate reasoning capabilities more closely related
to predicate logic.
The "relational Bayesian network" language represents one approach
to first-order probabilistic reasoning. In this talk I will introduce
the syntax and semantics of this language. It will be shown how
relational Bayesian networks incorporate such classical types
of probabilistic models as Markov chains and random graphs. Several
examples will illustrate the broad applicability of this language,
extending to modeling problems outside the core domain of Artificial
Intelligence. Finally, I will show how relational Bayesian networks
provide a new approach to convergence laws in finite model theory.
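
As a small illustration of the kind of classical model mentioned above, the
following Python sketch samples an Erdős–Rényi random graph, viewed as a
probability distribution over a binary relation edge(i, j) on a finite
domain. This is only an illustrative, hypothetical helper, not the
relational Bayesian network language itself, which the talk introduces.

    # Illustrative sketch only: a random graph as a distribution over the
    # relation edge(i, j) on a finite domain {0, ..., domain_size - 1}.
    import random

    def sample_random_graph(domain_size, p, seed=None):
        """Each atom edge(i, j), i < j, holds independently with probability p."""
        rng = random.Random(seed)
        return {(i, j): rng.random() < p
                for i in range(domain_size)
                for j in range(i + 1, domain_size)}

    if __name__ == "__main__":
        for (i, j), present in sorted(sample_random_graph(5, 0.5, seed=0).items()):
            print(f"edge({i}, {j}) = {present}")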