We present a data-driven method for the real-time synthesis of believable steering behaviours for virtual crowds. The proposed method interlinks the input examples into a structure we call the Perception-Action Graph (PAG), which can be used at run-time to efficiently synthesize believable virtual crowds. A virtual character's state is encoded using a temporal representation, the Temporal Perception Pattern (TPP). The graph nodes store groups of similar TPPs, whereas the edges connecting the nodes store the actions (trajectories) that were partially responsible for the transformations between those TPPs. The proposed method is tested on a variety of scenarios using different input data and compared against a nearest-neighbours approach, which is commonly employed in other data-driven crowd simulation systems. The results show up to an order of magnitude speed-up with similar or better simulation quality.
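
The core data structure described above can be illustrated with a minimal sketch. This is a hypothetical rendering under the stated assumptions, not the authors' implementation: node, edge, and function names (`PAGNode`, `PAGEdge`, `synthesize_step`) are invented for illustration, and TPPs are abstracted to opaque values compared by a user-supplied distance function.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Perception-Action Graph (PAG).
# Nodes group similar Temporal Perception Patterns (TPPs);
# edges store the actions (trajectories) linking one TPP group to another.
# All names here are assumptions for illustration.

@dataclass
class PAGNode:
    tpps: list                                   # group of similar TPPs
    edges: list = field(default_factory=list)    # outgoing PAGEdge objects


@dataclass
class PAGEdge:
    action: list      # trajectory data associated with this transition
    target: "PAGNode" # node whose TPPs result from taking the action


def synthesize_step(node, query_tpp, distance):
    """At run-time, pick the outgoing edge whose target node contains
    the TPP closest to the character's current perception, and return
    the stored action plus the next graph node."""
    best = min(
        node.edges,
        key=lambda e: min(distance(query_tpp, t) for t in e.target.tpps),
    )
    return best.action, best.target
```

Traversing the graph this way replaces a global nearest-neighbours search over all examples with a local search over a node's outgoing edges, which is one plausible source of the reported speed-up.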