Smooth vision-based walking is a fundamental problem in legged locomotion. This paper describes a vision-guided virtual (biped) walking machine --- ViGWaM --- which is emulated with hardware-in-the-loop stereo-vision components. The image-processing architecture and the reconstruction algorithms for obstacle recognition and localization are presented. It is shown how the perception results are used for online step-sequence adaptation by concatenating various offline-computed step primitives. The closed-loop-controlled ViGWaM is visualized in an augmented-reality display. An experiment demonstrates the performance of the described vision-based guidance approach in a prototype walking scenario.