Appendix
Table of Contents
- Steering in Practice
- Challenges in Steering
3. Steering in Practice
Computer-controlled autonomous characters (also called autonomous agents, autonomous pedestrians, etc.) are a vital part of a living, breathing virtual world. Over the past fifteen years, the quality of these characters has improved and a standard generic process has emerged: animations are produced by an artist's hard work and/or motion capture, and the intelligence to control the rich set of animations is either pre-scripted or manually programmed with carefully tuned state machines. This general approach holds in movies, games, and other virtual worlds. These days, even facial animation, high-quality rendering, rag-doll physics, and more can all be done in real-time, allowing for a rich variety of behaviors.

One unique challenge when creating believable autonomous characters is their ability to steer through the environment, including navigation and locomotion. Historically, it was exciting enough to see characters populating a virtual environment at all, regardless of their steering intelligence. Now, however, steering is becoming an increasingly important part of autonomous characters in virtual worlds, because it is the most prevalent aspect of the characters that users see. While most other aspects of a character can be made convincingly realistic with pre-computed data or pre-scripted events, steering is unique: it usually cannot rely on pre-computed data like animations, but it also does not fit easily into the state-machine approach for modeling intelligence. Steering has remained a challenge in practice, even in state-of-the-art virtual worlds like Grand Theft Auto 4 or The Sims 3.
Creating a believable steering algorithm poses many challenges. The following list covers just a few of them, with a slight focus on pedestrian steering. We welcome feedback, discussion, opinions, and even disagreement on this topic; please post such thoughts to the SteerSuite discussion group.

- The space of possible steering decisions is dense, continuous, and extremely large, making it impractical for a steering algorithm to evaluate every possible choice before making a decision.
- Greedy/heuristic decisions are generally not sufficient to capture the complex intelligence of human-like steering. In many cases, the natural, realistic thing to do is to backtrack (or side-step, or step backwards), essentially creating a sub-goal that does not obviously make progress toward the final goal. In other words, the solution space is highly non-linear, with many incorrect local minima/maxima.
- Real humans steer in many different ways at the same time: by following a plan, by making predictions, by reacting, and through subtle unspoken communication and social conventions. It is not always clear how to combine or prioritize these aspects.
- Agents may need to steer differently depending on personality, adding another layer of complexity that must be compatible with the steering algorithm.
- Believable macroscopic crowd phenomena, such as lane formation and bottleneck cooperation, should emerge in dense, crowded environments.
- Steering is inherently integrated with an agent's locomotion capabilities. This property simultaneously enhances and limits the ways agents can steer.
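To make the first two challenges concrete, here is a minimal sketch (not SteerSuite code; all names and weights are illustrative assumptions) of a common greedy approach: blending a goal-seeking force with a neighbor-avoidance force for simple 2D point agents. It also shows why such heuristics fall into local minima.

```python
import math

def seek(pos, goal):
    # Unit vector from the agent toward its goal.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    return (dx / d, dy / d) if d > 1e-9 else (0.0, 0.0)

def avoid(pos, neighbor, radius=2.0):
    # Repulsive force that grows as a neighbor gets closer than `radius`.
    dx, dy = pos[0] - neighbor[0], pos[1] - neighbor[1]
    d = math.hypot(dx, dy)
    if d > radius or d < 1e-9:
        return (0.0, 0.0)
    w = (radius - d) / radius  # 0 at the radius, 1 at contact
    return (w * dx / d, w * dy / d)

def steer(pos, goal, neighbors, w_seek=1.0, w_avoid=2.0):
    # Greedy heuristic: weighted sum of the individual steering aspects.
    fx, fy = (w_seek * c for c in seek(pos, goal))
    for n in neighbors:
        ax, ay = avoid(pos, n)
        fx += w_avoid * ax
        fy += w_avoid * ay
    return (fx, fy)

# With no neighbors, the agent heads straight for the goal.
print(steer((0.0, 0.0), (10.0, 0.0), []))          # (1.0, 0.0)
# With a neighbor directly in its path, the attractive and repulsive
# forces can cancel exactly -- the agent stalls in a local minimum
# instead of side-stepping, which is the non-linearity discussed above.
print(steer((0.0, 0.0), (10.0, 0.0), [(1.0, 0.0)]))
```

In the second call, the seek force (1.0, 0.0) and the weighted avoidance force (-1.0, 0.0) sum to zero, so a purely greedy agent makes no progress; a human would simply step around the obstacle.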