I just wanted to see if anyone has attempted to leverage neural network models to aid in botting?
While looking around, it seems the vast majority of bots leverage Lua unlockers to expose the WoW API. But this in and of itself has led to a game of cat and mouse between Blizzard and botters: Blizzard makes alterations to Warden/the game client, and botters have to identify those changes and patch around them to stay hidden.
But what if you didn't need to go to that depth? In the modern day of machine learning (and libraries like TensorFlow and PyTorch), why can't a bot leverage the same interface the player does?
I have a few smaller-scale models already created and trained, such as ones to detect the player's HP/mana, whether the player is casting, and whether the player is dead. I intend to move on to target detection next. But I just wanted to see if anything similar has been done.
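Worth noting that for some of these "getters" you don't even need a learned model. HP/mana bars are drawn at a fixed screen position with a distinctive colour, so a classical pixel-counting pass over that region often works as a baseline before reaching for a network. A minimal numpy sketch, assuming an RGB screenshot and a made-up health-bar rectangle you'd calibrate for your own UI layout:

```python
import numpy as np

def hp_fraction(frame, bar=(35, 40, 215, 52)):
    """Estimate HP as the fraction of the health bar that is filled.

    frame: H x W x 3 uint8 RGB screenshot of the game window.
    bar:   (x0, y0, x1, y1) screen rectangle of the bar (hypothetical
           values here; calibrate against your actual UI).
    """
    x0, y0, x1, y1 = bar
    region = frame[y0:y1, x0:x1]
    r = region[..., 0].astype(int)
    g = region[..., 1].astype(int)
    b = region[..., 2].astype(int)
    # A "filled" health pixel is strongly red and weakly green/blue.
    filled = (r > 150) & (g < 90) & (b < 90)
    # Fraction of bar columns containing any filled pixel ~ fill ratio.
    return filled.any(axis=0).mean()
```

A learned model earns its keep once the thing you're reading isn't fixed-position/fixed-colour (cast bars that move, death state, target frames), which is presumably where your classifiers come in.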
In theory, once a series of models has been created, they would effectively act as getters for the WoW API without ever touching it. For interacting with the game, I already have some trivial code that attaches to the game window and sends keyboard/mouse input. I do intend to navigate the game the same way existing bots do: parsing the game's map files, drawing routes, and running pathfinding algorithms on the game's coordinates (with the coordinates themselves also taken off the screen via OCR or a local model).
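For the pathfinding side, once you've extracted a walkability grid from the map files, standard A* gets you routes between coordinates. A self-contained stdlib sketch, assuming a hypothetical 2D grid where truthy cells are walkable and movement is 4-directional:

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 2D walkability grid (truthy = walkable).

    start/goal are (row, col) tuples; returns the path as a list of
    cells from start to goal, or None if unreachable.
    """
    def h(cell):  # Manhattan distance heuristic (admissible for 4-dir moves)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares cells
    open_heap = [(h(start), 0, next(tie), start, None)]
    came_from = {}           # cell -> predecessor, set when cell is settled
    g_cost = {start: 0}
    while open_heap:
        _, g, _, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue         # stale entry; node already settled cheaper
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(
                        open_heap, (ng + h((nr, nc)), ng, next(tie), (nr, nc), node)
                    )
    return None
```

The same skeleton scales up to a navmesh extracted from the ADT/WMO data; only the neighbour generation and edge costs change.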