Greetings.
OK, here's the issue I'm having (and I hope I'm posting in the right thread). After reading about the Iraq war vet who was able to play World of Warcraft without sight, and being both a software engineer and blind myself, it occurred to me that if a bot can play World of Warcraft with no human intelligence behind it, there's no reason something couldn't run in- or out-of-process to speak text, give object info, etc. as needed (let's forgo questions like why a blind person would want to play World of Warcraft, how a blind person can program a computer, or how a blind person knows when to stop wiping (Google it, it's come up), as I have neither the time nor the inclination to answer them here).
One thing that's got me puzzled, though, is navigation. Drawing on my own experience, when I'm navigating an environment I only need information about what's within a certain distance ahead of me (if I'm walking from the light rail station to my office, an open manhole 15 miles off isn't going to mean much to me). If I were writing an AI, I'd obviously use NavMeshes to determine where it's safe to walk--but then I'm also handling movement. For this project, I'm assuming that the player is still going to maintain a certain amount of control over … um … the player, and will thus only need to know whether Something Bad™ will happen if the current trajectory is maintained. Will I need to distribute NavMeshes with this, or can I do it through some other means? Am I more concerned than I need to be about the added bloat of nav data?
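To make the "current trajectory" idea concrete, here's a minimal sketch of the kind of lookahead probe I have in mind, independent of whether the safety check behind it is a navmesh lookup, a raycast against collision geometry, or something else. Everything here is hypothetical: `PlayerState`, `hazard_ahead`, and the `is_safe` callback are names I made up for illustration, not any real WoW or addon API.

```python
import math
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class PlayerState:
    x: float
    y: float
    heading: float  # radians; axis convention is whatever your position source uses


def hazard_ahead(player: PlayerState,
                 is_safe: Callable[[float, float], bool],
                 lookahead: float = 10.0,
                 step: float = 1.0) -> Optional[float]:
    """Sample points along the current heading and return the distance
    to the first point `is_safe` rejects, or None if the next
    `lookahead` yards look clear.

    `is_safe` is a stand-in for whatever query is actually available:
    a navmesh lookup, a raycast against collision data, or a
    height/liquid check from extracted map files.
    """
    dist = step
    while dist <= lookahead:
        px = player.x + dist * math.cos(player.heading)
        py = player.y + dist * math.sin(player.heading)
        if not is_safe(px, py):
            return dist
        dist += step
    return None
```

The idea would be to run something like this on a timer while the player is moving and speak a warning only when the reported distance crosses a threshold, so the speech output isn't chattering constantly. The open question is what backs `is_safe`: if it has to be full nav data shipped alongside the tool, that's where the bloat concern comes in.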