My boyfriend was just telling me about a recent cover story in New Scientist magazine, which had a provocative title about remembering the future, but apparently just said something about how the brain processes information about the future and about the past in similar ways. At any rate, it stimulated a conversation about what it would be like if there were beings that “remembered” the future the way that we remember the past. After trying to figure out whether this would necessarily involve them experiencing things in reverse order, I at least came up with an argument that we wouldn’t really be able to effectively communicate with these beings (at least given some aspects of certain theories of conversation). This is true even bracketing any concerns about strange thermodynamic properties of these systems, or concerns about being able to produce sensible utterances that individually have meaning to both these speakers and us.
On some standard models of conversation (with Stalnaker’s, I believe, as a paradigm of some sort), the state of a conversation can be represented by some set of information that is in the common ground, together with the separate information states of the two speakers. (Let’s assume there are only two speakers for convenience.) Whenever someone utters something, this updates the common ground, and presumably also updates the other speaker’s beliefs.
However, if one conversational participant remembers only the past utterances in the conversation, while the other “remembers” only the future utterances, then you get a bit of a problem. The common ground is supposed to be information that both participants take for granted, but the future-rememberer has no access to earlier utterances, so it no longer makes sense to model the common ground as including all earlier utterances (and their logical consequences) – for just the same reason that we don’t model it as containing all later utterances. So in some sense, the common ground shouldn’t contain any utterances at all from this conversation – it can only include the background presuppositions. So the conversational context never changes. But then this seems to make individual utterances pointless, if their point is to change the conversational context.
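To make the argument vivid, here’s a toy sketch in Python. It assumes (simplistically) that the utterance-derived part of the common ground at any point in the conversation is just the set of utterances both participants can “remember” from that point; the function and variable names are my own illustrations, not anything from Stalnaker.

```python
def accessible(utterances, position, direction):
    """The utterances a participant can 'remember' from a given position:
    earlier ones for a past-rememberer, later ones for a future-rememberer."""
    if direction == "past":
        return {u for i, u in enumerate(utterances) if i < position}
    else:  # "future"
        return {u for i, u in enumerate(utterances) if i > position}

utterances = ["hello", "nice weather today", "goodbye"]

# At every point, the utterance-derived common ground is what is accessible
# to BOTH participants -- the intersection -- and it is always empty, since
# one set only contains earlier utterances and the other only later ones.
for position in range(len(utterances)):
    past_view = accessible(utterances, position, "past")
    future_view = accessible(utterances, position, "future")
    print(position, past_view & future_view)  # prints an empty set each time
```

The point of the sketch is just that the two accessibility sets are disjoint by construction, so no utterance from the conversation itself can ever enter the common ground – only the background presuppositions remain.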
Of course, we might be able to get information from one another, but it seems like it wouldn’t work through any sort of conversation, or probably even through intentionally saying things to one another. It would have to come just through observation.
Another interesting side note we realized in discussing this is that there’s a sense in which intention, rather than prediction, is the natural future-directed counterpart of memory. After all, intentions are the mental states that have direct causal connections to future events, while predictions generally involve some sort of indirect connection running through past events. (Let’s leave aside self-fulfilling prophecies, like the things the Oracle tells people in Greek myths.) Of course, a far higher proportion of intentions end up disconnected from the appropriate action than is the case for memories and their corresponding experiences. That is, it’s much easier to have an intention that you don’t carry out than it is to have a false memory (though both are certainly possible).