For similarity comparison against peer submissions, we compute a weighted average of three distinct tests, each of which produces a score based on logical similarity and language similarity. For web matches, we run a passive machine learning layer alongside another set of tests that check similarity to web sources. Most cases of plagiarism involve students copying code from the web, from sources such as GitHub and Stack Overflow. In addition to the billions of sources checked on the open web, popular source code websites that sit behind content blockers, such as Chegg and CourseHero, are also checked.

When a professor confirms a case of plagiarism, our algorithm refines its knowledge of the features that indicate plagiarism and adjusts its confidence level accordingly. As cheating evolves and students try to beat detection systems, the detection system must improve and change with them, so our system is constantly learning better strategies.

The most important aspect of Codequiry's checker is that the results you obtain are meaningful and detailed (not just a percentage similarity), allowing you to investigate potential cases of plagiarism with supporting evidence. When a submission is flagged by our system, there is most likely something going on with that submission.
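To illustrate how multiple similarity tests can be combined into a single score, here is a minimal sketch of a weighted average. The weights, test scores, and function name are all hypothetical, chosen for illustration only; they are not Codequiry's actual tests or weighting.

```python
# Hypothetical sketch: combine three similarity-test scores (each in
# the range 0 to 1) into one overall score via a weighted average.
# The weights below are illustrative, not Codequiry's real values.

def weighted_similarity(scores, weights):
    """Return the weighted average of per-test similarity scores."""
    if len(scores) != len(weights):
        raise ValueError("scores and weights must align")
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Three hypothetical tests, each scoring logical and language similarity.
scores = [0.82, 0.64, 0.71]   # per-test similarity: 0 = none, 1 = identical
weights = [0.5, 0.3, 0.2]     # hypothetical relative importance of each test
print(round(weighted_similarity(scores, weights), 3))
```

A weighted average like this lets a checker emphasize stronger signals (for example, structural similarity) over weaker ones while still producing a single comparable number per submission pair.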

Did this answer your question?