We had a bit of a problem in Intelligent Practice, where users would
sometimes see the same type of exercise (that is, the same template) repeatedly. This made it look
like there weren't many exercises available and made for a frustrating
experience.

The good news of the day is that this problem, which we've dubbed the groundhog problem, has been resolved. Here is how.


### The old way

The learning algorithm behind Intelligent Practice keeps track of the level of mastery of each user. A user's mastery score is used to determine how likely it is that she will get any given exercise right. I won't go into the details of how this is done, but it gives us the starting point for the groundhog problem.

Let's say our user — call her Rita — is practising the Chemistry chapter on Acids and Bases. Let's say this chapter has 20 templates. Each template represents many different exercises, since each exercise is generated programmatically from a template, with different chemical compounds, scenarios, etc. Still, all level-of-mastery calculations are done on a per-template basis, since a template represents a *type* of exercise.

The template sequencing algorithm presented templates in a way that would challenge the user by just the right amount. Too many difficult exercises leave the user demoralised. Too many easy exercises mean that the user won't be stretched enough to learn anything.

Let's say the figure below represents the probabilities that Rita will successfully answer exercises from each of the 20 available templates (represented as dots). Some templates are quite difficult, relative to Rita's level of mastery, and have low probability of being answered correctly. Some templates are easy and these have a high probability of being answered correctly.

The sequencing algorithm tried to balance the difficulty of the exercises shown to Rita by picking out templates that have a probability of success, \(P(\text{success})\), between 0.5 and 0.7. This means that she'll have a reasonable chance of getting it right, but will have to do some thinking — exactly what we want for optimal learning from practice.

### The problem

Every now and again it happened that there were only one or two templates in the [0.5, 0.7] range, with other templates of similar difficulty just barely outside this range.

In this case Rita would see the same type of exercise over and over, even though there were other slightly easier or slightly harder exercises available, which should really also be shown to her.

### The new way

The problem above is really that the system has a *hard decision boundary*. If an exercise is just a little to the left of the 0.5 cut-off probability, it would not be selected, even though it could be. The solution is to soften or blur the decision boundary. We do this by mapping \(P(\text{success})\) to a new value, indicating the likelihood of showing a template next, relative to all the other templates.

Here's the mapping. The height of a dot indicates the relative probability, \(p\), of a template being selected next.

As the red dots above show, exercises that used to be in the [0.5, 0.7] range still have a good chance of being selected. Meanwhile the templates that were just to the left of the 0.5 threshold now also have a chance of being picked — a smaller chance than the red dots.
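The exact shape of the mapping is only shown graphically, but any smooth bump centred on the old [0.5, 0.7] range behaves this way. Here is a minimal sketch — the Gaussian shape, centre, and width are illustrative assumptions, not the production formula:

```python
import math

def relative_probability(p_success, centre=0.6, width=0.15):
    """Map P(success) to a relative selection weight.

    A smooth bump (here a Gaussian, assumed for illustration)
    replaces the hard [0.5, 0.7] cut-off: templates near the
    centre get high weight, and the weight decays smoothly
    instead of dropping to zero at a boundary.
    """
    return math.exp(-((p_success - centre) ** 2) / (2 * width ** 2))

# A template inside the old range keeps a high weight...
print(round(relative_probability(0.6), 3))   # 1.0
# ...while one just left of the old 0.5 cut-off still has a chance:
print(round(relative_probability(0.45), 3))  # 0.607
```

Any similar bump (a logistic plateau, a triangular kernel, etc.) would remove the hard boundary just as well; the key property is that the weight never snaps to zero at 0.5 or 0.7.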

This already solves the groundhog problem, since there is no hard decision boundary and all templates have some chance of being picked (although that chance is small for templates at the far left and far right of the plot above). But while we're at it, why not add a few more improvements?

**Recency**

We really don't want to show the same template twice in a row, and generally want to make it less likely that a template that was shown recently gets shown again. Define the recency, \(r\), of a template to be 1 if it was the last one shown, 2 if it was the one before that, etc. Multiply the relative probability of seeing a template by

\[1 - \frac{1}{r}\]

The template with a recency of 1 (the template that was just seen) will now have a probability of 0 of being seen again.
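A quick check of \(1 - 1/r\) shows how this plays out: the just-seen template is blocked entirely, and the penalty fades quickly for older templates.

```python
def recency_factor(r):
    """Multiplier for a template with recency r (1 = just seen)."""
    return 1 - 1 / r

print([round(recency_factor(r), 2) for r in (1, 2, 3, 4, 10)])
# [0.0, 0.5, 0.67, 0.75, 0.9]
```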

**Frequency**

We also want to penalise templates that have been seen a lot over the past while. Let \(n\) be the number of times that a template has been seen during the past week. Multiply the relative probability of seeing that template again by

\[\frac{n+4}{2n+4}\]

This does not penalise templates that have never been seen (\(n = 0\) gives a factor of 1), and reduces the relative probability by a factor of at most 2 for templates that have been seen a lot.
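Evaluating \((n+4)/(2n+4)\) confirms both properties: unseen templates are untouched, and heavily seen templates approach (but never pass) the factor-of-2 floor.

```python
def frequency_factor(n):
    """Multiplier for a template seen n times in the past week."""
    return (n + 4) / (2 * n + 4)

print(frequency_factor(0))              # 1.0  (never seen: no penalty)
print(round(frequency_factor(3), 2))    # 0.7
print(round(frequency_factor(100), 2))  # 0.51 (approaching the 1/2 floor)
```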

So, the final formula for the relative probability of showing a given template next is

\[ p \times \left(1 - \frac{1}{r}\right) \times \left(\frac{n+4}{2n+4}\right) \]
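Putting the three factors together, picking the next template becomes a weighted random draw. The sketch below is a minimal illustration: the Gaussian bump standing in for the mapped probability \(p\) is an assumption (the post only shows the real mapping graphically), and the template data is made up.

```python
import math
import random

def weight(p_success, recency, n_seen, centre=0.6, width=0.15):
    """Relative probability of showing a template next:
    p * (1 - 1/r) * ((n + 4) / (2n + 4)).

    The Gaussian bump used for p is an illustrative assumption.
    """
    p = math.exp(-((p_success - centre) ** 2) / (2 * width ** 2))
    return p * (1 - 1 / recency) * ((n_seen + 4) / (2 * n_seen + 4))

# (template_id, P(success), recency, times seen this week) — made-up data
templates = [("acid_naming", 0.60, 1, 5),   # just seen: weight is 0
             ("ph_calc",     0.55, 3, 2),
             ("titration",   0.45, 7, 0),
             ("buffers",     0.90, 9, 1)]

weights = [weight(p, r, n) for _, p, r, n in templates]
next_id = random.choices([t[0] for t in templates], weights=weights)[0]
assert next_id != "acid_naming"  # the just-seen template can never repeat
```

Because the weights are relative, there is no need to normalise them before sampling; `random.choices` handles that, and a zero weight simply means that template cannot be drawn.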

P.S. If, after reading this post, the name of the problem still doesn't make sense, you should watch the classic film *Groundhog Day*.

P.P.S. The plots in this post were generated using the wonderful xkcd-style plot generator for matplotlib by Jake Vanderplas.

Hi Carl! What is this stuff you're explaining here? Wow! I can't believe it's you, you still look exactly the same!
