Tree of Savior Forum

IMC Staff, please read this suggestion to deal with bots. [Corrected]

The bot simply filters its targets so it never attacks "the machine"…

Banning the related accounts sounds very fair to me.

That is why, if you read the post, you will see that I suggested a random name each time.

Targeting by name is only one of the possibilities. The decoy is still an entity, and the bot can rule it out at the very least by comparing its ID against all known entities. Then it won't matter what name it takes: it never gets attacked because it never matches.

You could also tell the bot to attack mobs a, b, c, and d in the zone and nothing else, no matter what it is. That is another easy way to bypass the idea, since there are only about 4-5 different mobs per zone.
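
To make the point concrete, here is a rough sketch (purely hypothetical, with made-up IDs and field names) of how a bot could whitelist its targets by entity-type ID, so a decoy with a randomized name never even gets considered:

```python
# Hypothetical sketch: a bot whitelists targets by entity-type ID,
# ignoring whatever display name the server attaches.

# IDs the botter scraped for this zone beforehand (made-up values).
KNOWN_MOB_IDS = {10231, 10232, 10233, 10234}

def pick_targets(visible_entities):
    """Return only entities whose type ID is on the whitelist.

    `visible_entities` is assumed to be the list the client already holds,
    e.g. {"type_id": 10231, "name": "Hanaming"}. A trap mob with a random
    name but an unknown type ID is simply skipped.
    """
    return [e for e in visible_entities if e["type_id"] in KNOWN_MOB_IDS]

# Example: the randomly named decoy never gets attacked.
entities = [
    {"type_id": 10231, "name": "Hanaming"},
    {"type_id": 99999, "name": "xqzt-4821"},  # GM decoy with a random name
]
print(pick_targets(entities))  # -> only the Hanaming entry
```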

I can go on, but there is no viability to this.

Bots see what the human sees and can therefore be directed in a fashion that circumvents dumbfire solutions.

What IMC needs to do is attack the programs, not just the bot users behind them. Going after both is fine, but we will get no results without macro protection or prevention. Players turn these programs on and off as they please so they don't have to do any of the work themselves. Others, when alerted by an in-game chat detection, take over control as if they were a real player and turn off all of the program's functions. That makes it tough to avoid falsely accusing someone.

Also, as a request to IMC, please add a captcha to the bot reporting system, on both the reporting player's end and the accused bot's end. Notify the player whether their request went through once the accused bot answers incorrectly. (If it does go through, lock the accused to that channel and give them a captcha every 5 minutes until they reach the login screen.)

Most importantly, don't give the captcha a time limit, and disable everything else while it is active. With this system IMC could record both incomplete and complete requests and prioritize the accused players with more failed captchas.

Any griefing could be reported on the forums by the falsely accused player. IMC could also handle that by tracking how many times a player reports the same person through the captcha system.
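
For illustration only, here is a rough sketch of the bookkeeping this suggestion implies. None of this is IMC's actual system, and every name in it is made up: failed captchas raise an accused account's priority in the review queue, and every captcha-backed report stays on record.

```python
# Rough sketch of the suggested bookkeeping (hypothetical, not IMC's system):
# count failed captchas per accused account and sort the review queue by them.
from collections import Counter

failed_captchas = Counter()   # accused account -> number of failed captchas
report_log = []               # (reporter, accused, solved) tuples

def record_report(reporter, accused, solved_captcha):
    """Log a captcha-backed report; failed solves raise the accused's priority."""
    report_log.append((reporter, accused, solved_captcha))
    if not solved_captcha:
        failed_captchas[accused] += 1

def review_queue():
    """Accused accounts ordered by how many captchas they have failed."""
    return [acc for acc, _ in failed_captchas.most_common()]

record_report("playerA", "suspect1", solved_captcha=False)
record_report("playerB", "suspect1", solved_captcha=False)
record_report("playerC", "suspect2", solved_captcha=True)
print(review_queue())  # -> ['suspect1'] (suspect2 solved it, so no failures)
```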

Horrible idea. Even if you could somehow get the normal playerbase not to abuse it, botters would annoy normal players with it until everyone cried and begged IMC to remove it. And that is assuming you even get past that "if" in the first place.

even with a random name, that won’t stop them.
the server has to tell the client what kind of “thing” it sees on the screen, and the bot just listens for “the thing i’m looking for”.
Name is not even the easiest identifier for a human player to check – you probably recognize most mobs by their picture first.

a bot could…
…listen to the “identifier” code the server sends
…check the art that the client calls up
…compare the monster to a list of “known mobs” for that map

…probably a lot of other things i’m not thinking of at the moment.

but in the end, the server -must- give the client correct identifying data, or the client won’t know what to draw.
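
as a toy illustration of that last point (field names and ids are made up, not the actual ToS protocol): whatever the server sends so the client knows what to draw, a bot can key off the exact same value.

```python
# Toy illustration: the client must receive something that identifies what to
# draw, and a bot can key off that same field. Names/IDs here are invented.

MOBS_ON_THIS_MAP = {"mob_onion", "mob_hanaming"}   # list scraped per map

def handle_spawn_packet(packet):
    """Pretend spawn handler: the client uses `sprite_id` to pick artwork,
    so the bot can use the same value to decide whether to attack."""
    sprite_id = packet["sprite_id"]     # what the client needs to draw it
    if sprite_id in MOBS_ON_THIS_MAP:
        return "attack"
    return "ignore"                     # unknown sprite: likely a decoy or player

print(handle_spawn_packet({"sprite_id": "mob_onion", "name": "Totally A Real Onion"}))
print(handle_spawn_packet({"sprite_id": "gm_decoy", "name": "Onion"}))
```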

a well-enough written bot can probably see a lot more than a human sees, actually

if it’s simply a basic recorded macro, there’s nothing to actually detect. windows tells the game client “there is a mouse click here.” and the game doesn’t know the difference between a scripted click and a “live” click.
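
as a minimal (windows-only) sketch of what a recorded macro boils down to, using the standard user32 input calls: the game just receives an ordinary click either way.

```python
# Minimal Windows-only sketch of what a recorded macro boils down to:
# the OS synthesizes a click, and the receiving application just sees a click.
import ctypes

MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004

def click_at(x, y):
    """Move the cursor and send a left click via the legacy user32 API."""
    ctypes.windll.user32.SetCursorPos(x, y)
    ctypes.windll.user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
    ctypes.windll.user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)

# click_at(800, 450)  # from the game's point of view, just another mouse click
```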

i wouldn’t use it, if i had to f**k around with a captcha, and i know i’m not the only one that would abandon it.

real bots can beat a captcha much more easily than the people who make captchas would like you to believe… and if it’s intended to prevent abuse, this does the opposite, causing a real player who is falsely reported to have to deal with captchas every few minutes. and what about someone just sitting AFK while they go have lunch?

reporting something in the forums is no better than using /say to ask if a gm is around. the devs do not have time to carefully monitor every thread in the forums. far more likely is that the victim will simply get laughed at more / get lots of “i hope it works out for you” posts.

Yes, a bot can see even more, but for explanation's sake it is sufficient to make the point that nothing will elude the bot if a human can perceive it.

An AFK player would simply have a captcha waiting on the screen and an incomplete report, handled as they are now. That is why there is no timer.

There is nothing stopping you from doing that as things are now, and any griefing attempt would take a legit player too long to get past their own captcha to keep up with the person they are accusing.

Bots would be taken out, and flagged for repeated griefing more quickly, if they decided to go that route. Get it wrong and the player simply goes back to the title screen through the in-game menu to remove the effects. Although bots may be able to solve captchas quickly and accurately, they would still leave an incomplete report that the moderators could look into. This system would be aimed more at removing the simpler bots distributed with the same code first.

Excluding you, apparently, others would use it as long as rewards were handed out, as mentioned, for confirmed successes.

Let's say a griefer reports someone twice. That is on record, and the mods could look into something like that as well. Multiple people do it within a short period? Look into it; there is someone to be penalized either way.
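
As a hypothetical follow-on to the earlier sketch, flagging the reporter side could look something like this. The time window and the names are assumptions, not anything IMC actually has:

```python
# Hypothetical: flag a reporter who reports the same person repeatedly
# within a short window, so mods can review either party.
import time
from collections import defaultdict

WINDOW_SECONDS = 3600              # "short period" is an assumption; tune as needed
report_times = defaultdict(list)   # (reporter, accused) -> timestamps

def note_report(reporter, accused, now=None):
    """Record a report and return True if this pair should be reviewed."""
    now = time.time() if now is None else now
    times = report_times[(reporter, accused)]
    times.append(now)
    recent = [t for t in times if now - t <= WINDOW_SECONDS]
    report_times[(reporter, accused)] = recent
    return len(recent) >= 2        # second report of the same person -> look into it

print(note_report("griefer", "victim", now=0))     # False, first report
print(note_report("griefer", "victim", now=120))   # True, same target reported twice quickly
```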

EDIT: On second thought, a player doesn’t even need to report the same player twice, so why even give them the option? (Don’t know if that is how it is currently.)