How to quickly update 200k offsets every league menu

  1. #1
    GameHelper's Avatar ★ Elder ★ CoreCoins Purchaser
    Reputation
    2453
    Join Date
    Jun 2015
    Posts
    3,048
    Thanks G/R
    454/2198
    Trade Feedback
    0 (0%)
    Mentioned
    65 Post(s)
    Tagged
    1 Thread(s)

    How to quickly update 200k offsets every league

    ==Context==

    Originally Posted by zaafar View Post
    Pretty sure updating offsets for 80% of the game (or 2000 game objects) every league would be a PITA! So I wanted to share my experience with you. Before this league, I created around 40 patterns which didn't break due to the new league update and automatically updated 95% of my private HUD at (actually before) league start. The last 5% was updated manually within minutes of the league start. Also, those 40 patterns were created after carefully analyzing the POE game functions (not every "What accesses this data" candidate is a good candidate). So what are you doing to make your league start/offset updating less of a PITA?
    Originally Posted by Aoooooooo View Post
    I also wrote an offset updater, but only 50% of it works; maybe I need to analyze the PoE game functions in more depth.
    Originally Posted by pushedx View Post
    Make a thread in the General PoE section for this, as to not get off-topic in this one, and I don't mind talking about this type of stuff.

    So basically this thread is not about how to find new offsets, it's about how to update existing offsets efficiently (ideally within 1-2 hours of league start).
    Last edited by GameHelper; 08-22-2021 at 05:02 PM.

  2. #2
    pushedx's Avatar Contributor
    Reputation
    257
    Join Date
    Nov 2009
    Posts
    137
    Thanks G/R
    8/135
    Trade Feedback
    0 (0%)
    Mentioned
    12 Post(s)
    Tagged
    0 Thread(s)
    Originally Posted by zaafar View Post
    So basically this thread is not about how to find new offsets, it's about how to update existing offsets efficiently (ideally within 1-2 hours of league start).
    While I can't give you a specific solution for something like updating an entire API within 1-2 hours of league start, I can talk about some of the things I've tried and done in recent years that have helped a lot, and what conclusions I've arrived at from trying a bunch of different processes. Of course, most of what I'll be writing about is in the context of bot writing (via an API), but the same applies to other tools as well.

    First, I've mentioned it before at times, but I've kept a copy of almost every game client since the start. In doing so, I have a 300+ client archive that I could reference when exploring the idea of "how to do faster league start updates". If anyone isn't already keeping version-labeled clients, start today. Also, don't label them based on the "tags" version you see on the title screen; that's useless. Instead, label them based on the (usually) numeric game version you can get from their patching server (and if you don't know how to work with that stuff, debug the client doing the update check and replicate it to find a way of getting the version). I've run an update check for the game since virtually the start, so I've gotten text notifications when new patches were deployed since the days of 0.11. Sometimes multiple patches get deployed before you can grab the first one (if it's in the middle of the night or something), so it's better to automate it, although I never bothered doing that since I usually needed to be around at the time of an update anyways. It's simple enough though, so I'd also recommend taking the time for that as well if you have continued long-term plans with the game.

    Starting from the context of having all those clients to start figuring out a solution, I first wrote programs to load all clients into memory via the WinApi debugging api. MSDN has a good basic overview of what you need to do in their Creating a Basic Debugger reference. The reason you want clients launched from a debugger is so you get the debug heap enabled. This kills process performance, which is why running from a debugger is slower than attaching a debugger to an already running process, but you can more easily visualize memory boundaries of objects as well as uninitialized data/padding, which helps make life easier. So for example, if you had a struct with only a byte in it (assuming x64), without a debugger, the first field might be set to 0 as part of initialization, but the 7 remaining bytes (for 8-byte alignment) will be random since they'd be the previous values of that memory from before. If you're just exploring memory, you'd see all these random values and try to figure out what they meant if you couldn't immediately find all code references to where this memory was being used, which is a waste of time, but you wouldn't know that.

    If instead you had just run it from a debugger, you'd see the 0xBAADF00D memory pattern written to the uninitialized memory, so you'd know it's either padding or simply uninitialized, and you'd know right away to only focus on the first byte. Figuring out if memory is padding or uninitialized isn't precise, but typically you'll be able to guess correctly with enough experience. If you see initialized data of a smaller size that doesn't fit an 8-byte alignment, most of the time the extra bytes are just padding. Sometimes though, in some sections of code, the game just doesn't default initialize fields, but you'll eventually figure it out when looking for something and seeing your "padding" bytes don't have values of the 0xBAADF00D pattern. I'll get more into this stuff when I get to the memory exploring section later on.

    With all the clients loaded in memory, the first thing I did was just run some basic patterns across all clients to understand changes over time and see which patterns have the most coverage. There are some code patterns that work for literally years, and others that seem to break every big update. Trying to figure out good patterns for things is tricky because systems in the game can change at any time. It's not a matter of your patterns, just a matter of whether the game is changing certain systems or not. In a sense, the best way to update 200k patterns in 1-2 hours is to not have 200k patterns in the first place. I'll get more into this when I talk about the conclusions I've made. Getting back to running patterns across many clients, there's a bit of a decision you have to make here. It's infeasible to write patterns that work across all clients; too much changes over time. However, you also don't want your patterns to be short-lived either.

    How do you make patterns that are "future proof"? The short answer is, you can't. I could have patterns that work flawlessly for 300 clients, and then they release client #301 which breaks everything due to compiler changes or massive game revamps, etc... I think you really need to understand this point, because you need to decide what you actually want to work towards with your patterns. You can have patterns that work most of the time, but when they break, it requires a lot of time to fix, and usually that's during league start when you need updates as fast as possible. I'm skipping ahead a bit, but basically the conclusion I came to is: if you want to solve the league start downtime problem, you need two projects. The first is a super-lite, limited functionality version that just does the bare minimum and will thus require the minimal amount of time to get working on league start. The second is your full blown version with all the bells and whistles.

    The reason gets back to not being able to future proof your update process. I say this from having a sample size of basically 10 years doing updates for Path of Exile stuff. There's been years where patterns didn't change much between leagues and everything felt great, and then there's been years where it seemed like they were breaking every single thing possible just because. I've had patterns that worked for years magically break not on the league start patch, but some hotfix that came days later and it's always like WTF. Anyways though, point is there might be times where it seems like your patterns are perfect and they last multiple leagues and everything is great, then some big changes happen and you have to remake a lot of them. The only way to be able to handle that situation is with two projects.

    The obvious problem with two projects is just that: you now have two projects, and each project needs to work differently, unless you can come up with a unified version that can support adding and dropping features without breaking everything else. Ultimately, that was the problem that was not realistically solvable with Exilebuddy. I knew an "EB-lite" version would help alleviate the massive downtime, but it was a struggle doing 1 project as it was, and going from a full blown api to cutting out the vast majority just so it worked faster on new leagues would result in a major identity crisis of why even do the full api in the first place? At some point, you have to decide: "few features and fast updates" or "many features and slow updates".

    Getting back to specific offset updating strategies, byte patterns are one way, but they're really easy to break. When the compiler re-orders instructions, or uses a different register that results in a 2-byte encoding instead of 1, all of these things slow you down. One strategy I first tried with Exilebuddy, back after a compiler update nuked virtually every one of my patterns, was switching to heuristic-based asm instruction matching. So for example, rather than using a byte pattern to find something like "mov [0x12345678], rax", where you're trying to find the 0x12345678, I'd instead just look for a mov instruction with the first operand being a rip-relative memory access, and the second operand being a 64-bit gpr register. Basically, I'd match instruction behaviors rather than the byte patterns, which is what heuristic AVs do to try and detect certain mutation variants of known malicious code.

    This helped a lot, but it still runs into the same core problem: the game can just break all your work any update. However, while it's more complicated to implement, I pretty much switched to only using this process for more modern PoE stuff, just because it tends to last longer and is more flexible. Right now for C++, I prefer Bitdefender's bddisasm library (but there are others), and when I have to do it in .NET (which I try to avoid because it's way too slow) I'll use SharpDisasm, which is just a C# port of udis86 (what Buddy used for a few things back in the day). When trying to run logic across 100s of clients, C# was too slow to scale, so I had to switch to C++ to make it work. We're talking the difference of being able to do things in seconds vs minutes, so it was a pretty big deal.

    To give one example of this process, I'll refer to the process I use to update LocalData/InstanceInfo: gist 0da4c7f8675baed762c2dd2d8c4a4aa6 (GitHub).

    I search for the specified string in rdata to find its reference in the code section. From there, I know the new/ctor for LocalData is before it, and the new/ctor for InstanceInfo is after it, so I determine their addresses by finding the respective call to new, and then I know the next function call is to the ctor itself. The only "pattern" I'm using is the string text itself, and this process has worked this way since probably 2012 when I first started using it. Basically, I don't need code patterns to find these two constructors because I'm searching for them another way.

    Knowing the ctor for both, I can then deduce the size, since it's passed to new beforehand. That means I can tell whether the size of these changes each update using the same process. To talk about something new I didn't do during EB days, I invested the time in writing "structure reconstruction code". Basically, I have a setup now where I give my struct builder a ctor and the size, and it will attempt to rebuild the structure as best as possible based on the memory accesses in the ctor. It will never be 100% perfect due to the way C++ works and compiler optimizations, but it's pretty darn good.

    That in itself saved massive chunks of update time, because when GGG adds one pointer in the middle of a structure, I don't have to go and manually update the names of everything anymore since it's automatic. However, I just mean the offset-based names of the unknown stuff; I still have to re-associate labeled names of known stuff, but it's the difference of only having to spend 5% of the original time as opposed to 100% of the time, so it's a big deal. I've not updated and run my stuff for most of 3.15, but this is an example of an automatically generated InstanceInfo: [3.15.0.3]InstanceInfo.cs (GitHub gist).

    I would then use a C# program to dump the structure and log the memory to a text file, so I could update names as needed and look at new changes; because I'd have the output from the previous game version, I could just diff the structs to see what got hit in an update. I used this process for all of the main stuff in the game, and as a result what used to take at least 3 days to update was cut down to roughly 12-24 hours after league start, while still having a massive amount of things to update. I've obviously skipped a lot about the process, but this should give an idea of the actual improvements I saw from changing the way I go about this stuff.

    Just to recap everything thus far:
    - Any new update logic I write, I backtest against clients to get an understanding of how long it might last. The more "modern" clients it works on, the better.
    - I rarely use byte patterns for code anymore; instead I use instruction-based heuristic matching.
    - I invested in writing structure reconstruction logic to save me from having to manually manage the large structures.
    - While not directly mentioned, I write a bunch of utility programs to help speed up relabeling work (as I used to manually update the names of everything during EB days) and to dump memory so I can visually confirm what changed and where things might have moved.

    When it comes to components and game files (like WorldAreas.dat, for example), I use these same processes, but I've written additional logic to automate the process of finding and dumping all of them. So for example, I don't have to do component updates by hand anymore, because I just auto-generate all of them and use the struct rebuilder to generate their layouts. Likewise with the game file formats: I wrote code a few years ago to extract the exact format from the client, so I haven't had to manually update game file layouts for ages. I still have to label things though, but the bulk of the time-consuming work of figuring out data type sizes and struct sizes is long over. That stuff tends to last quite a few leagues, so with 3.15 I need to do the regular updates to get it working again, but the time investment in doing this stuff is 100% worth it.

    To give old examples of these two things, here are the outputs of:
    WorldAreas.cs: https://gist.github.com/pushedx/e211...35af7a6e3149d4
    LifeComponent.cs: https://gist.github.com/pushedx/74bc...edb6e99646bf82

    Most of the time, the fixed bytes in structs are just padding, so you can see the benefits of not having to waste time trying to figure that stuff out.

    Oh, and to tease something else that's possible: I brute-forced pointers from the generated structs to associate different known types. Here's what my actual final WorldAreas.cs looks like: https://gist.github.com/pushedx/6787...badc25a06e41c7

    Since I write code to auto-dump via reflection, figuring out data in game files was much faster and easier this time around than back during EB days. Everything comes together more nicely now, and I have a bigger-picture view of everything thanks to all the data that can be dumped. I should also mention that most of this stuff is only possible because of x64. I actually looked into doing these things for the 32-bit client back during EB days, but it was just infeasible due to 32-bit calling conventions and the way the compiler worked back then. Nowadays, x64 generated code is super clean and very easy to work with, so I worked all of this out after realizing, a few years ago, how much better the x64 client was to work with than the 32-bit one.

    I'm skipping over the exact implementations of everything because there's a lot of moving parts and work to be done to achieve all this, so I'm not trying to pass this off as being trivial and something anyone can do in a weekend. I spent years building up what I have (and I took a break at the end of 3.14) and there's a lot of annoying little issues to work around for everything. For the long term, it's certainly been worth it because the amount of updates I had to do on a regular basis post-EB have been exponentially less than during EB-days, but that brings me back to some of the conclusions I skipped ahead to earlier.

    If the goal is to have 1-2 hour downtimes on league start, the only way to guarantee that is to have a separate, minimal project that just doesn't need much to work. The reason for that is simple: when you build up a complex update process with lots of moving parts, like I now have, you're spending a lot of time building something that you then also have to update on league start, and most of the time that isn't going to take just a few hours, even though the total time for me has been way less than doing the API updates of EB. 3.15 broke some of my stuff, which is fine, but the benefit of having such an extensive update system is negated if it's going to break on the day you need it most, which is the inherent problem with Path of Exile. Since GGG doesn't do PTR testing (and seemingly no real testing anyways), you're always going to have this situation where you either spend a lot of time updating the final product, or you spend a lot of time updating the tools that update the product.

    I had been working on an idea to solve the whole 2-project issue, but I decided to take a break post-3.14 for a while. The original idea was to build an "api builder": rather than making EB, make a project that generates projects like EB. That way, I could update a minimal amount for league start, generate a project using only the updated stuff, and then slowly work on updating everything else as the league went on to add more features and whatnot. Progress-wise it was great, I felt like I was on a good track and had solved a lot of problems, but ultimately the deciding factor that made me step away was that there is only 1 problem I can't solve, and that problem is the only one that actually matters: "If people can't easily get away with botting/rmt, then all this work I'm doing is pointless."

    The problem with doing all the things I have over the past several years is that it's been insanely time consuming. There's no point where it'll ever stop being time consuming. That means to "make it worth it", you need a certain number of paying people using it. Then, you run into issues with too many people using the same thing, which I had a solution for on paper, but it's still a concern. The reality is, what those people need is virtually league-start-ready stuff that just works right away, so they can try to get ahead of GGG's server sided detection and take advantage of the economy that first week. What is needed at league start isn't the same as what is needed 1-2 months in, so that's what led me to the 2-project conclusion. However, that 3rd month is a dead month, and botting/rmt is pretty much a guaranteed ban for most people, so everyone just waits for the next league. The game has had a toxic update cycle since 2016, with a massive amount of players playing month 1, then falling off a cliff by month 3. That means you have to milk that first month, and that naturally conflicts with what I want to have and want to do, so there was no way for me to solve the problem that actually mattered.

    But anyways, I strongly feel now that the best way to solve the problem is by finding a different problem to solve. You can speed a lot of things up like I did, but you're going to have to pay the time cost somewhere eventually. As a result, you need to find new and creative ways to do things differently. For a HUD project, that just means bare minimal features for the start, and then slowly enabling things as they get updated. Trying to design a project in such a way is tricky though, so that's why I was exploring the idea of an api generator, which then in turn would be used with a "minimal viable product" generator. I feel like it can work, but the problems with PoE just aren't worth me trying to solve anymore for now. Maybe that will change in the future, but it seems like GGG is intent on running the game into the ground, which is also demotivating.

    I think that about covers it. I know it's another tsunami of text, so feel free to ask for clarifications or if I left something else out!

  4. #3
    GameHelper's Avatar ★ Elder ★ CoreCoins Purchaser
    Originally Posted by pushedx View Post
    The obvious problem with two projects is just that: you now have two projects, and each project needs to work differently, unless you can come up with a unified version that can support adding and dropping features without breaking everything else. Ultimately, that was the problem that was not realistically solvable with Exilebuddy. I knew an "EB-lite" version would help alleviate the massive downtime, but it was a struggle doing 1 project as it was, and going from a full blown api to cutting out the vast majority just so it worked faster on new leagues would result in a major identity crisis of why even do the full api in the first place? At some point, you have to decide: "few features and fast updates" or "many features and slow updates".
    100% agree with this. I am going the route of "few bare-minimum features and fast updates".

    Originally Posted by pushedx View Post
    Getting back to specific offset updating strategies, byte patterns are one way, but they're really easy to break. When the compiler re-orders instructions, or uses a different register that results in a 2-byte encoding instead of 1, all of these things slow you down. One strategy I first tried with Exilebuddy, back after a compiler update nuked virtually every one of my patterns, was switching to heuristic-based asm instruction matching. So for example, rather than using a byte pattern to find something like "mov [0x12345678], rax", where you're trying to find the 0x12345678, I'd instead just look for a mov instruction with the first operand being a rip-relative memory access, and the second operand being a 64-bit gpr register. Basically, I'd match instruction behaviors rather than the byte patterns, which is what heuristic AVs do to try and detect certain mutation variants of known malicious code.
    Have you looked at Ghidra patterns (as shown here)? Those patterns work at the bit level rather than the byte level, so they become more stable/heuristic-based rather than operand-based. Also, one more thing: GGG has learned not to do big "compiler changes or massive game revamps" on league start; they do that 1 or 2 weeks before league start. This behaviour might help us update our patterns a week before league start and then pray to the GGG gods that they don't break.



    Originally Posted by pushedx View Post
    I search for the specified string in rdata to find its reference in the code section. From there, I know the new/ctor for LocalData is before it, and the new/ctor for InstanceInfo is after it, so I determine their addresses by finding the respective call to new, and then I know the next function call is to the ctor itself. The only "pattern" I'm using is the string text itself, and this process has worked this way since probably 2012 when I first started using it. Basically, I don't need code patterns to find these two constructors because I'm searching for them another way.

    Knowing the ctor for both, I can then deduce the size, since it's passed to new beforehand. That means I can tell whether the size of these changes each update using the same process. To talk about something new I didn't do during EB days, I invested the time in writing "structure reconstruction code". Basically, I have a setup now where I give my struct builder a ctor and the size, and it will attempt to rebuild the structure as best as possible based on the memory accesses in the ctor. It will never be 100% perfect due to the way C++ works and compiler optimizations, but it's pretty darn good.
    that's really cool!!!
    Last edited by GameHelper; 08-23-2021 at 09:34 PM.

  5. #4
    pushedx's Avatar Contributor
    Originally Posted by zaafar View Post
    Have you looked at Ghidra patterns (as shown here)? Those patterns work at the bit level rather than the byte level, so they become more stable/heuristic-based rather than operand-based. Also, one more thing: GGG has learned not to do big "compiler changes or massive game revamps" on league start; they do that 1 or 2 weeks before league start. This behaviour might help us update our patterns a week before league start and then pray to the GGG gods that they don't break.
    This looks like it'll end up having the same exact problem as using byte signatures. The problem isn't with matching the instructions you need; it's how you match the re-orderings that result from minor changes in the generated assembly. In the example shown, no reordering is possible, because everything shown there has to execute in a very specific order. For more complicated code though, you can't guarantee the order will be preserved between client updates, because the logic itself could have changed slightly (such as a local variable now or no longer being used), or a code change could make the compiler pick a register with a larger or smaller encoding, changing the number of bytes used; you can't match both variants with only 1 signature (you'd have to operate on instructions instead).

    For example, taken from the current 3.15.2.2 client:
    Code:
    00007FF7ADB51FF3             | 0FB6DB                              | movzx ebx, bl                                    |
    00007FF7ADB51FF6             | 83F8 02                             | cmp eax, 0x2                                     |
    00007FF7ADB51FF9             | 41:0F44DF                           | cmove ebx, r15d                                  |
    00007FF7ADB51FFD             | 41:BA 4C010000                      | mov r10d, 0x14C                                  |
    00007FF7ADB52003             | 6548:8B0425 58000000                | mov rax, qword ptr gs:[0x58]                     |
    00007FF7ADB5200C             | 48:8B08                             | mov rcx, qword ptr [rax]                         |
    00007FF7ADB5200F             | 45:33C9                             | xor r9d, r9d                                     |
    00007FF7ADB52012             | BA 880D0000                         | mov edx, 0xD88                                   |
    00007FF7ADB52017             | 45:8D41 10                          | lea r8d, qword ptr [r9 + 0x10]                   |
    00007FF7ADB5201B             | 41:0FB70C0A                         | movzx ecx, word ptr [r10 + rcx]                  |
    00007FF7ADB52020             | E8 ABFBD200                         | call <_new>                                      |
    This is the allocation code for LocalData. Why does the compiler put "mov rcx, qword ptr [rax]" at 00007FF7ADB5200C, when RCX isn't consumed until 00007FF7ADB5201B in "movzx ecx, word ptr [r10 + rcx]"? I'm sure there's a reason, but how do you handle it if that instruction were moved to right before 00007FF7ADB5201B rather than where it is now? You don't know what else could be put in between or how the ordering might end up, so this is just a limitation of matching instruction bytes/bits.

    But if instead you track instructions based on logical patterns, you know RCX is going to be set to param1 for a function call, and RDX is going to be param 2. Rather than trying to find a reference byte signature to locate the size of the allocation, you can instead just find the call to new, then "search backwards" to where R/EDX is set, and then add sanity checking to make sure the 2nd operand is an IMM. You know this type of pattern matching will only break if the function call itself changes, and the size is no longer passed as parameter 2, which is always possible, but less likely than trying to get the right byte patterns working across tons of clients.

    As for game updates before big leagues, I've just rarely seen them release a league and not do a bunch of hot patches in that first week that end up breaking something that used to be stable. Like I mentioned before, I've had stuff working for the first league patch, only for them to make big breaking changes in a hotpatch a day or two later, so if you're going to have a large API, you're most likely going to encounter that as well, but if you only have a small one, then chances are you'll be fine.

    The updates they do nowadays are certainly a lot better than in the past during EB days, as I've spent many league starts updating for fun stuff in recent times, but I mean that doesn't change the fact that the updates are beyond your control, so having your stuff wiped out randomly is always an issue. Spending more time to learn the client, and building up from the ground up around core game systems certainly helps, but I've also seen them make changes to core stuff league after league, so there's no real avoiding it. Having structures generated automatically avoids the hassles of having to fix the layouts, but you still have to do some work that just adds up over time.

    Being able to back test against a bunch of league start clients certainly helps you answer the question of what would have happened had you been using a sig, but the next league, when you have everything ready to go for fast updates, they can throw you a curveball and cause you to have to re-update a bunch of things you thought were going to be stable. Trying to find that perfect balance of having a lot of things automated for updates, and then only having to spend a little time each league update is hard, because if you spend all your time trying to get fast updates, then you won't have time to spend on the project itself. Having different people working on a project certainly helps though, but it's a lot of niche specialty work.

