Drag the sliders to adjust the probabilities.
Explore on your own or follow the suggested prompts.
Hover over a parameter to see its description.
P(init): The probability that the student already knows a skill.
P(trans): The probability that the student will learn a skill on the next practice opportunity.
P(slip): The probability that the student will answer incorrectly despite knowing a skill.
P(guess): The probability that the student will answer correctly despite not knowing a skill.
P(learned) depends on whether the student answers correctly; it becomes the new value for P(init).
The update equations are sketched after these descriptions.
Simulate student responses by choosing an answer button below.
P(learned if correct): The new value of P(init) if the student answers correctly.
P(learned if wrong): The new value of P(init) if the student answers incorrectly.
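For reference, and assuming the sliders implement the standard BKT update, these two values are computed in two steps:
Bayes' rule first revises P(init) in light of the observed answer, and the transition step then adds the chance that the
student learns from the practice opportunity itself:

P(init | correct) = P(init) · (1 − P(slip)) / [P(init) · (1 − P(slip)) + (1 − P(init)) · P(guess)]
P(init | wrong) = P(init) · P(slip) / [P(init) · P(slip) + (1 − P(init)) · (1 − P(guess))]
P(learned if correct) = P(init | correct) + (1 − P(init | correct)) · P(trans)
P(learned if wrong) = P(init | wrong) + (1 − P(init | wrong)) · P(trans)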
Prompts for Exploration
Find a parameter combination that increases P(learned).
Find two different combinations that result in mastery (i.e., P(learned if correct) ≥ 0.95).
Press "answer correct" to verify your results.
Explore what adjustments you have to make depending on P(init).
Try a higher P(init) and a lower P(init) and compare your results.
What does this tell you about BKT?
What happens to P(learned) if P(guess) and P(slip)
stay at 0.5 and you only adjust P(init) and P(trans)?
The P(learned) formula differs depending on whether the student answers the question correctly.
However, both formulas give the same probability when P(guess) and P(slip) are both 0.5.
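To see why, substitute P(guess) = P(slip) = 0.5 into the update sketched above:

P(init | correct) = P(init) · 0.5 / [P(init) · 0.5 + (1 − P(init)) · 0.5] = P(init)
P(init | wrong) = P(init) · 0.5 / [P(init) · 0.5 + (1 − P(init)) · 0.5] = P(init)

The observation carries no information, so in both cases P(learned) = P(init) + (1 − P(init)) · P(trans),
and only P(init) and P(trans) affect the result.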
What happens if P(guess) and/or P(slip) exceeds 0.5?
P(learned) is higher if the student answers incorrectly than if they answer correctly; this inversion occurs whenever P(guess) + P(slip) > 1.
Typically P(guess) is bounded at 0.3 and P(slip) at 0.1 for this reason.
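A minimal sketch of this effect, assuming the standard BKT update described above; the slider values here are made up for illustration:

def bkt_update(p_init, p_trans, p_guess, p_slip, correct):
    # Bayes step: revise P(init) given the observed answer
    if correct:
        posterior = p_init * (1 - p_slip) / (p_init * (1 - p_slip) + (1 - p_init) * p_guess)
    else:
        posterior = p_init * p_slip / (p_init * p_slip + (1 - p_init) * (1 - p_guess))
    # Transition step: the student may also learn from this practice opportunity
    return posterior + (1 - posterior) * p_trans

# Hypothetical parameter values with P(guess) + P(slip) > 1
params = dict(p_init=0.4, p_trans=0.2, p_guess=0.7, p_slip=0.5)
print(bkt_update(correct=True, **params))   # P(learned if correct) ≈ 0.46
print(bkt_update(correct=False, **params))  # P(learned if wrong) ≈ 0.62, higher than if correct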
Can you find a parameter combination that decreases P(learned if incorrect)?
What do you think this represents in the real world?
P(learned) decreasing after an incorrect answer might model a student “forgetting” a skill,
which certainly happens in reality.
However, BKT does not account for forgetting, so this would actually be considered an
invalid parameter combination: P(learned) is assumed to increase regardless of whether
the student answers correctly (just by a smaller amount if the answer is incorrect).
Keep exploring! Can you find any other flaws or interesting characteristics of BKT?