We are taught in introductory logic the rudiments of model theory: sentences are interpreted with reference to a domain of discourse, with one-place predicates assigned sets of individuals, two-place predicates sets of pairs, three-place predicates sets of triples, and so on. So we might have something like:
UD: a, b, c
Fx = {a, b}
Gx = {b, c}
Rxy = {⟨a, c⟩}
Given this model, "everything is F" turns out to be false; "something is F" is true; "something stands in relation R to b" is false; and so on.
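To make the bookkeeping vivid, here is a minimal sketch of the model rendered in Python (the rendering and the little evaluator are mine, purely for illustration):

    domain = {"a", "b", "c"}       # UD: a, b, c
    F = {"a", "b"}                 # extension of F
    G = {"b", "c"}                 # extension of G
    R = {("a", "c")}               # extension of R: a set of ordered pairs

    # "Everything is F": F holds of every member of the domain.
    print(all(x in F for x in domain))         # False (c is not F)

    # "Something is F": F holds of at least one member of the domain.
    print(any(x in F for x in domain))         # True (a and b are F)

    # "Something stands in relation R to b": some x with (x, b) in R.
    print(any((x, "b") in R for x in domain))  # False (R contains only (a, c))

The point to notice is that each sentence's truth value is fixed once the domain and the extensions are fixed.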
I have been thinking about all of this in relation to metaphysics and truthmaking. I tend to think of metaphysics as the business of building models of the world, very general pictures of what things are like. Such models are hard to confirm observationally because of their extreme generality, but I don't think that undermines the idea that metaphysicians are model builders.
Still, though observational confirmation is often hard to come by, there are some constraints on the modelling process. For example, suppose the simple model above is a metaphysical theory. In that case, certain claims simply cannot be supported: they turn out to be false because they are incompatible with the model. The model can then be compared with other theories, including scientific ones, and even with observations. Genuine work can be done.
This all seems reasonable to me. But then, isn't something like this the thought at the heart of the truthmaker principle that truth supervenes on being? That is, it is inconsistent to present a model on which P comes out true while nothing in the model supports the truth of P. There may be wrinkles to iron out here, but the core idea seems sound, and so something like the truthmaker principle is bound to survive.
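Continuing the Python sketch (again my own illustration, with made-up extensions), here is one way to picture that constraint: the evaluator computes truth from the extensions alone, so two models that agree on their extensions must agree on every sentence, and the only way to change what is true is to change what there is in the model.

    def something_is_F_and_G(domain, F, G):
        # "Something is both F and G" is true just in case the extensions
        # of F and G share a member of the domain.
        return any(x in F and x in G for x in domain)

    model1 = {"domain": {"a", "b", "c"}, "F": {"a"}, "G": {"c"}}
    model2 = {"domain": {"a", "b", "c"}, "F": {"a"}, "G": {"c"}}            # same being
    model3 = {"domain": {"a", "b", "c"}, "F": {"a", "b"}, "G": {"b", "c"}}  # different being

    for m in (model1, model2, model3):
        print(something_is_F_and_G(m["domain"], m["F"], m["G"]))
    # model1 and model2 must agree (False, False); only a difference in
    # being, as in model3, flips the verdict to True.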
Curious what others think.