Tree of Savior Forum

Captcha system for bot fighting?

It is easy to fight bots:
1- Put a GM online.
2- Add an obligation to respond to the GM's standard questions; nobody gets back to farming without being able to speak.
3- The GM bans whoever answers poorly, either for X amount of time or simply by adding a strike to the account; get caught botting and failing to answer 3 times and it's a permanent ban.
4- Bots create more accounts, okay, but if the GM is active every day as a proper job it should be easy to clean them out; it's the mid-to-high level bots that are the real bother.

Drawbacks: GM salary

Okay, I have no idea what these things mean… Especially that last one sounds like a Harry Potter spell

Here is an online course that teaches some of the beginner-to-advanced techniques in machine learning. One of the first demos in the course video shows how you can train a computer to recognize handwritten digits (and this technique can be extended to letters too!):

Coursera: Neural Networks

what is the point you're trying to get across? because i don't really get it.

if it's about the captcha, i've ditched this idea a long time ago, maybe i should update the main post.

There's one thing bots aren't very good at: common sense.

Instead of making a simple "type the letters that appear in the box" captcha, you could make a captcha that consists of several trivial yes/no questions to be answered by the player.

For example:

  • A player is spamming the chat to prove his point. Is he correct to do so? (Yes/No)

Note that even question answering is a topic covered and widely used by machine learning projects. But when the questions require in-game player knowledge and common sense (such as telling whether being a scammer is right or wrong), things start to get really complicated for a bot.

The set of questions could consist of 10 questions that a player has to answer every 24h of in-game time, with a silly reward for getting them right. If a bot were to try answering them "by brute force" (that is, guessing each answer with a 50% chance of being right), there would be a (1/2)^10 = 1/1024 ≈ 0.098% chance of getting all 10 correct. This means around 1 out of 1000 bots would get through, but still, it would cut the number of bots that pass by roughly 1000x.
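Purely as an illustration (nothing like this exists in the game, and the question pool and function names below are made up), here is roughly what such a check and the brute-force math look like in code:

```python
import random

# Hypothetical pool of common-sense yes/no questions (True = Yes, False = No).
QUESTION_POOL = [
    ("A player is spamming the chat to prove his point. Is he correct to do so?", False),
    ("Is scamming other players acceptable behaviour?", False),
    ("Should you report a bot when you see one?", True),
]

def run_bot_check(answer_fn, num_questions=10):
    """Ask num_questions questions; pass only if every answer is correct."""
    questions = random.choices(QUESTION_POOL, k=num_questions)
    return all(answer_fn(text) == expected for text, expected in questions)

# A "brute force" bot that just guesses Yes/No at random.
random_guesser = lambda _question: random.choice([True, False])

print((1 / 2) ** 10)                  # 0.0009765625 -> ~0.098% chance to pass
print(run_bot_check(random_guesser))  # almost always False
```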

The biggest flaw of this system, however, is that the questions can be datamined and their answers memorized by the bots. This means the questions would have to be updated and changed with some frequency, which would be a big annoyance for the developers.

EDIT: Special note on the 24h in-game time delay between sets of questions: the system would have to be prepared for every possible situation before triggering the bot check. Popping a 10-question set to answer while you're in the middle of a boss fight isn't fun. Things like that would have to be taken into consideration and well thought out by the developers.


You forgot to add: with varying success.
A lot depends on the captcha itself. If it's plain text without artifacts to make reading harder, a bot can copy-paste it 100% of the time. In the end it all comes down to the quality of the captcha.

Make a halfway usable one which trips up bots a few times. Fail it 5 times, because you are either a bot or suffering from a severe case of dyslexia, and get banned for a day.

…today's Artificial Intelligence technology can solve even the most difficult variant of distorted text at 99.8% accuracy. Thus distorted text, on its own, is no longer a dependable test.

  • by Google research

Nowadays, bots read text and classify images better than humans. This may be weird to imagine, but it's true.

Google is now using new captcha methods that, instead of relying solely on the "answer" from the user (such as the typed text), also analyze the whole process the user went through to input that answer (mouse movement, typing speed, click positioning, etc.).
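To make the idea concrete (this is not Google's actual method; the signal and threshold below are invented), such a check looks less at the typed answer and more at how it was typed:

```python
import statistics

def looks_scripted(keypress_times_ms, threshold_ms=5.0):
    """Flag input whose inter-keypress intervals are suspiciously uniform.

    keypress_times_ms is a list of timestamps in milliseconds; the 5 ms
    threshold is an arbitrary placeholder, real systems combine many signals.
    """
    intervals = [b - a for a, b in zip(keypress_times_ms, keypress_times_ms[1:])]
    if len(intervals) < 2:
        return False
    # Humans type with noticeable jitter; a script replaying keys often doesn't.
    return statistics.stdev(intervals) < threshold_ms

print(looks_scripted([0, 100, 200, 300, 400]))   # True  (perfectly regular)
print(looks_scripted([0, 132, 251, 420, 505]))   # False (human-like jitter)
```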

This may fall a bit off-topic, but it's a good read if we were to think of new "captcha" ideas for the game:

[quote="mrshadowccg, post:27, topic:152765"]…today's Artificial Intelligence technology can solve even the most difficult variant of distorted text at 99.8% accuracy. Thus distorted text, on its own, is no longer a dependable test.
by Google research[/quote]
It must be true because it's on the internet!
Sorry, but hardly any net article escapes clickbait claims; this is no different. Whether Google or the military has developed AIs capable of such accurate guesses does not matter, we are talking about small-time bots for games.
If you put in some common sense, that 99% melts down to a bunch of factors where small-time bots score between 20 and 60%.


You clearly have no knowledge of this subject at all.

Here's a nice compilation of different academic papers on different datasets for both digit recognition (the MNIST dataset) and image recognition (the CIFAR datasets):
http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html

As you can see, with academic explanations of the methods and techniques behind those numbers, simple digit recognition can reach an error rate as low as around 0.28%, while simple image recognition reaches 96.5% accuracy. Those numbers are comparable to, or even better than, most humans taking the same classification tests manually.

Distorting the images or text does not make the numbers fall much behind that. Of course, you still want a human to be able to read it, so there's a limit to how much you can distort.

If you still don't believe machines can get past those captchas with relative ease, I recommend making a text classifier yourself. There's a fair amount of machine learning software for beginners that I could recommend, one of them being Weka: http://www.cs.waikato.ac.nz/ml/weka/

You can make a digit classifier with 90% or better accuracy with extreme ease using Weka and MNIST. This isn't just Google, it's anyone. Letters don't differ much from digits; you can extend this by just moving to a proper dataset.
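Weka is a Java GUI tool, but the same point can be made in a few lines of Python with scikit-learn. A minimal sketch, using scikit-learn's small bundled digits dataset as a stand-in for MNIST (an assumption on my part, just to show how low the barrier is):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# 8x8 grayscale digit images, 10 classes (0-9).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# An off-the-shelf support vector classifier, no tuning at all.
clf = SVC(gamma=0.001)
clf.fit(X_train, y_train)

print(accuracy_score(y_test, clf.predict(X_test)))  # typically ~0.98-0.99
```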

Again, you have no idea how bots or probabilities work.

You are still talking about "learning" and falling behind by "not much".
Captchas are not meant to pop up endlessly so you can teach an AI how to work around them. The system your definition describes is, by design, the kind of system I already suggested was retarded.
I can only repeat myself: an awful lot depends on how a captcha system is executed. If you do it with plain text and unlimited attempts to try, well, jolly frikin sure you can crack it with any third-rate bot or AI.

Bots can make new accounts, so in a sense they do have unlimited attempts.

As long as the text is only modified to the point where a human can still recognize what needs to be answered, the AI can be programmed to recognize the same.

And if you mess with the text too much, then actual players will fail.

And we are already at a point where it's just too much fuss to bot.

Let's try not to go full retard for once, Garfy

That's not how they learn.

They have a separate training set of data on which they "learn" prior to facing server verification.

Usually the training is done offline, and only when the bot developer is "happy" with high enough accuracy results in testing are the bots put into practice (that is, only then are they submitted to server verification).

This means the bot doesn't go through many tries; it guesses the correct answer the first time it sees the captcha, with a ~99.7% chance of being correct.
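A rough sketch of that train-offline / deploy-later workflow, assuming scikit-learn and joblib as stand-ins (real bot authors could use anything; the file name is a placeholder):

```python
from joblib import dump, load
from sklearn.datasets import load_digits
from sklearn.svm import SVC

# --- Offline, on the bot developer's machine ---
# Train against a labelled dataset (stand-in: scikit-learn's digits set),
# check the accuracy in testing, and only then freeze the model to disk.
X, y = load_digits(return_X_y=True)
model = SVC(gamma=0.001).fit(X, y)
dump(model, "captcha_model.joblib")

# --- Later, at runtime inside the bot ---
# No learning happens against the live server; the frozen model is loaded
# and answers each challenge on its first and only attempt.
runtime_model = load("captcha_model.joblib")
print(runtime_model.predict(X[:1]))  # classifies the first image it is shown
```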

I never mentioned unlimited attempts per try. Any brute-force method can break through given unlimited attempts if there's no delay or penalty for failing.
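For reference, the "fail it 5 times and get banned for a day" idea mentioned earlier is trivial to sketch; the exact numbers here are arbitrary, not anything the game actually does:

```python
import time

MAX_FAILS = 5                # arbitrary: fail this many captchas in a row...
LOCKOUT_SECONDS = 24 * 3600  # ...and the account is locked for a day

fail_counts = {}   # account_id -> consecutive failed attempts
locked_until = {}  # account_id -> unix timestamp when the lock expires

def register_captcha_result(account_id, passed):
    """Track failures; with a lockout, 50/50 guessing stops being 'free'."""
    now = time.time()
    if locked_until.get(account_id, 0) > now:
        return "locked"
    if passed:
        fail_counts[account_id] = 0
        return "ok"
    fail_counts[account_id] = fail_counts.get(account_id, 0) + 1
    if fail_counts[account_id] >= MAX_FAILS:
        locked_until[account_id] = now + LOCKOUT_SECONDS
        fail_counts[account_id] = 0
        return "locked"
    return "failed"
```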

And again, you think text recognition is the same as text interpretation. It is not. Reading a text and having to answer something based on common sense is a task exponentially harder than just recognizing distorted text in a captcha.

Doesn't matter if it's real text or text in an image. Computers can already "convert" one into the other absurdly easily.

I find it hard to believe bot operators have a borderline supercomputer to run so many captcha analyses just to farm a game.

I doubt one captcha requires a supercomputer, but generally games with captchas have them in multiple areas. To make the bot investment worth it, you need A LOT of bots… one commercial-grade computer can run a lot of simple bots, but a learning AI program? You need something bigger than that.

Sure, a distributed version of the bot is quite plausible… but you still need a supercomputer, the one that did the analysis first, before the distributed version is out.

I believe it is more likely that a human operator solves the captcha for the bot and lets the bot do the rest. Far more realistic. They probably hire or force someone to do all the captchas routinely on a line of computers.

Bots can hide under the ground and still 1-shot the whole map with autoattacks, so you might not actually "see" them, kek.

You don't need a supercomputer to run bots. Not even for the ones that get past captchas…

Like I said before, that's not how they work. They don't learn how to get past a captcha on the run… I mean, they could, but it's simply not the most efficient way to get through it.

The game doesn't need to "see" them to send them a captcha request, so I don't get your point. Plus, that sounds more like a "hacking" program, like a packet changer, rather than a common bot.

No… you can run the newer algorithms on a powerful desktop or even a laptop; they make use of neural nets and newer machine learning techniques:

Additionally, for captchas that a computer can't solve, the bot can be written so that it pages or signals a human operator to come drive the bot past a GM or captcha challenge.

Finally, someone who understands the subject.

There are also Chinese companies that hire employees just to type in captchas 24/7 so their bots can get past verifications.

Captchas are already flawed by nature. It's a very weak type of security. If they wanted to add a real anti-bot system, I'd suggest something I posted in another topic:

If organic behaviour detection were run with some frequency in game, bots wouldn't stand a chance.
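As a toy example of what "organic behaviour detection" could mean server-side (the metrics and cutoffs below are invented purely for illustration; a real system would combine many more signals):

```python
from dataclasses import dataclass
import statistics

@dataclass
class SessionStats:
    hours_online_last_24h: float
    kill_intervals_sec: list  # seconds between consecutive monster kills

def organic_score(s: SessionStats) -> float:
    """Crude 0..1 score: lower means more bot-like."""
    score = 1.0
    if s.hours_online_last_24h > 20:      # nobody farms 20+ hours a day
        score -= 0.5
    if len(s.kill_intervals_sec) > 10 and statistics.stdev(s.kill_intervals_sec) < 0.5:
        score -= 0.5                      # metronome-like kill timing
    return max(score, 0.0)

grinder = SessionStats(6.0, [12.1, 9.8, 15.3, 11.0, 8.7, 14.2, 10.5, 9.9, 13.3, 12.8, 11.1])
bot = SessionStats(23.5, [10.0, 10.1, 10.0, 10.0, 10.1, 10.0, 10.0, 10.1, 10.0, 10.0, 10.1])
print(organic_score(grinder), organic_score(bot))  # 1.0 0.0
```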

So I guess we are beyond the captcha conversation; needless to say, there are ways around it for botters, and it's discouraging for other players, beyond giving them the feeling that the publisher is at least attempting to solve the issue.

A reporting mechanism similar to what you talk about in your edit is a good move; allowing other players to report botters and hackers in game is probably the best step a publisher can take to combat the problem. This measure requires a quick-response team and a vigilant community, and it also comes with the issue of troll players who will abuse the system in an attempt to render it useless.

Additionally, building some sort of official auto-botting (auto-farm) system into the game is the best way to remove 3rd-party applications from the equation; however, that is typically an unpopular move.

Ultimately there's no great answer to the botter problem, and the best possible solutions tend to be unpopular with either the GMs or the playerbase.