Goal-oriented vision-based legged locomotion requires a high degree of coordination between perception and walking. How this coordination can be established remains a fundamental and rarely studied problem in legged robotics. This article outlines our investigations into this field by presenting recent results in three major research directions. The developed approaches are experimentally validated in a rapid prototyping environment comprising real vision components and a dynamically simulated walking machine. We show how perception results are employed for step sequence adaptation in the closed-loop controlled walking machine, visualized in an augmented reality display.