Our Approach towards Smart Learning
At Studycopter, we truly believe that a smart learning approach, rather than rote learning, is the key to more efficient learning and, eventually, better exam performance. It is very important to us that our students perform at their best and truly reach the pinnacle of their skill. That is why we don't claim to get the best score for everyone, which is practically impossible, but instead the best score you can achieve, by providing you with the right tools, including study notes, practice exercises and mock tests, for complete preparation. Our approach rests on two pillars: chapter recommendation and true skill assessment. 'Chapter Recommendation' provides an action plan for the student based on how they are performing, their past scoring pattern, the areas they are weak in, and so on. This plan is in no way binding; if students feel they need to follow a different path, they can take the route most comfortable to them. Once we have given them the tools, our next duty is to assess the student's skill as accurately as possible, so that both of us know where the student stands and can plan corrective measures accordingly. Our 'True Skill Assessment' algorithm was developed for this purpose.
On the 'Chapter Recommendation' front, we have focused on building the best capability for recommending which chapters and exercises to study next, based on their difficulty level as well as their weightage in the GMAT exam. We have classified each section into chapters, and each chapter into specific practice exercises to learn from. Chapters that are more difficult but carry lower overall weightage (Type C) are not worth spending time on towards the end of your preparation, whereas chapters that are easy and also weigh more on the GMAT (Type A) make more sense to finish earlier, so as to maximize the score you get in your practice mock tests.
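The triage idea above can be sketched in a few lines of code. This is only an illustration of the difficulty-versus-weightage buckets, with made-up chapter names, thresholds and scores, not Studycopter's actual recommendation algorithm:

```python
# Toy sketch of the Type A/B/C/D chapter triage: easy + high-weightage
# chapters (Type A) come first, hard + low-weightage ones (Type C) last.
# All names and numbers here are invented for illustration.

def classify(chapter):
    """Bucket a chapter by difficulty and GMAT weightage."""
    hard = chapter["difficulty"] >= 0.5
    heavy = chapter["weightage"] >= 0.5
    if heavy and not hard:
        return "A"  # easy and important: finish early
    if heavy and hard:
        return "B"  # hard but important: steady effort
    if not heavy and not hard:
        return "D"  # easy filler
    return "C"      # hard and low-weightage: deprioritize near exam day

chapters = [
    {"name": "Arithmetic",          "difficulty": 0.3, "weightage": 0.7},
    {"name": "Combinatorics",       "difficulty": 0.8, "weightage": 0.2},
    {"name": "Sentence Correction", "difficulty": 0.6, "weightage": 0.6},
]

for ch in chapters:
    print(ch["name"], "->", classify(ch))
```

A real recommender would of course rank within each bucket as well, but the bucketing is the core of the early-versus-late ordering described above.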
Consider the following chart:
We build this chart for each and every student, customized to your performance in the diagnostic tests and practice exercises you have taken on our website. Because it is updated in real time, the chart is constantly changing as you proceed with your preparation and get better and better at solving GMAT-style questions.
Easy-to-understand stats and breakdown analysis give you everything you need to know about your performance, so you have full visibility into where you stand at all times. We believe that the more you know about your strong areas and your areas for improvement, the more targeted your prep approach will be. That is why we track many metrics, including time taken per question, the difficulty level of each question, the priority index of the chapter involved, past performance in similar exercises, the subject id of the question, and how others have performed on that question.
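One possible shape for such a per-attempt record is sketched below. The field names mirror the metrics listed above but are illustrative only, not our actual schema:

```python
# Illustrative record of the metrics tracked for one question attempt.
# Field names are hypothetical, chosen to match the metrics in the text.
from dataclasses import dataclass

@dataclass
class QuestionAttempt:
    question_id: str
    subject_id: str                  # subject id of the question
    chapter_priority_index: float    # priority index of the chapter involved
    difficulty: float                # difficulty level of the question
    time_taken_s: float              # time taken on this question, seconds
    correct: bool
    peer_accuracy: float             # how others have performed on it

attempt = QuestionAttempt("q102", "quant", 0.8, 0.55, 92.0, True, 0.41)
print(attempt)
```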
The other part of our approach is giving the student a true assessment of their skill. Some test prep companies believe that exam preparation ends with the right recommendation, and that if a student is performing well in the recommended chapters, he or she will likely score higher on the GMAT exam as well. Nothing could be further from the truth, and this belief leads to unpleasant surprises for quite a lot of students on the final day. It is LIKELY that if you're performing well you will score higher on the GMAT, but it does not always happen. This is largely because of the scoring algorithm GMAC uses to score the GMAT exam: a system known to the world as Item Response Theory, or IRT. IRT was developed around the 1950s and 1960s by Educational Testing Service psychometrician Frederic M. Lord, the Danish mathematician Georg Rasch, and the Austrian sociologist Paul Lazarsfeld, primarily to address the lack of accuracy in classical test theory. The main problems with classical test theory were that it did not account for question difficulty (a question deemed difficult by the testing body may or may not be difficult for the test takers), and that the relationship between a question's difficulty and its worth in marks was not well established. Another big problem was that it had no way to discount the distortion in scoring caused by someone guessing an answer rather than knowing it. With IRT we still can't know for sure whether an answer was guessed, but the algorithm certainly accounts for guessing statistically.
IRT essentially introduces three parameters for every question: its difficulty, its ability to differentiate between top scorers and average or low scorers, and its susceptibility to guessing, i.e. how much to discount questions where a test taker has a higher chance of getting the answer right through guesswork. These are represented by b, a and c respectively. The three parameters are never assumed but are calculated from the responses of all the test takers who attempt a question, and as more and more test takers attempt a particular question, the parameters become more and more accurate, leading us to the true score of a test taker.
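To make "calculated from the responses of all the test takers" concrete, here is a deliberately simplified toy. Real IRT calibration fits a, b and c jointly by maximum-likelihood methods over the full response matrix; this sketch only shows the basic idea that an item's difficulty estimate comes from how the pool of test takers actually answered it:

```python
import math

# Toy illustration only, not real IRT calibration: convert an item's
# proportion-correct across test takers into a logit-scale difficulty.
# Items that more people miss come out with a higher difficulty b.

def naive_difficulty(responses):
    """responses: list of 0/1 outcomes for one question across test takers."""
    p = sum(responses) / len(responses)
    p = min(max(p, 0.01), 0.99)        # clamp to avoid log(0) at the extremes
    return -math.log(p / (1 - p))      # rarely-missed item -> negative b

easy_item = [1, 1, 1, 0, 1, 1, 1, 1]   # mostly answered correctly
hard_item = [0, 0, 1, 0, 0, 0, 1, 0]   # mostly missed
print(naive_difficulty(easy_item) < naive_difficulty(hard_item))  # True
```

As more responses accumulate, the proportion-correct estimate stabilizes, which is the sense in which the parameters "become more and more accurate" above.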
The IRT equation looks something like this, although there is a lot more to the model than this one equation:

p(θ) = c + (1 − c) / (1 + e^(−a(θ − b)))

where p is the probability of a person with skill level θ (theta) getting the question right.
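The three-parameter model (difficulty b, discrimination a, guessing floor c, as described above) can be written as a small function; the parameter values below are invented for illustration:

```python
import math

# Three-parameter logistic (3PL) IRT model:
# a = discrimination, b = difficulty, c = guessing floor.

def p_correct(theta, a, b, c):
    """Probability that a test taker with skill theta answers correctly."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# A test taker whose skill exactly matches the item's difficulty
# sits halfway between the guessing floor c and certainty:
print(round(p_correct(theta=0.0, a=1.2, b=0.0, c=0.2), 3))  # 0.6
```

Note that even a very weak test taker never falls below probability c, which is how the model accounts for lucky guessing statistically.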
Studycopter employs the same approach to score its practice, diagnostic and mock tests. We are currently collecting data for our question bank from our beta users. As more and more data is collected, we will be in a position to introduce highly accurate scoring, one that gives a very realistic assessment of your true skill and thus a correct reflection of your preparedness.
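To show how such scoring could work in principle, here is a sketch of recovering a skill estimate θ from a response pattern once item parameters are known: pick the θ that maximizes the likelihood of the observed rights and wrongs. The item parameters and the simple grid search are illustrative assumptions, not our production scorer:

```python
import math

# Sketch: maximum-likelihood estimate of skill theta under the 3PL model,
# via a coarse grid search. Item parameters (a, b, c) are invented.

def p_correct(theta, a, b, c):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def estimate_theta(items, responses):
    """items: list of (a, b, c); responses: matching list of 0/1."""
    best_theta, best_ll = None, -math.inf
    for step in range(-40, 41):            # grid over theta in [-4, 4]
        theta = step / 10
        ll = 0.0                           # log-likelihood of the pattern
        for (a, b, c), r in zip(items, responses):
            p = p_correct(theta, a, b, c)
            ll += math.log(p) if r else math.log(1 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Four items of increasing difficulty; getting the first three right
# yields a higher skill estimate than getting only the first one right.
items = [(1.0, -1.0, 0.2), (1.2, 0.0, 0.2), (0.9, 1.0, 0.2), (1.1, 2.0, 0.2)]
print(estimate_theta(items, [1, 1, 1, 0]))
print(estimate_theta(items, [1, 0, 0, 0]))
```

This is why two students with the same number of correct answers can receive different scores: which questions they got right matters as much as how many.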
Here is another research paper to help you dive deeper into our approach.
Overall, we ensure that:
- You know where you are coming from
- You know where you are going
- Your progress is tracked
- You stay motivated, by egging you on to study and making it more fun
- Your self-study is reflective. Our users are strapped for time, so it is critical to maximize time utility and not waste effort on work with minimal probability of inducing a score bump