28 November 2011


As you can see from the logo to the right, the Shuttleworth Foundation has very generously provided some funding towards the development of Monassis! The funding is part of their mini-grant programme.

This is great news since I'll be able to spend more time on development and perhaps even a bit of money on hardware. Since I'm developing the platform for both browsers and fairly low-end mobile devices, it's important to be able to test on both.

Thanks SF and thanks Kathi for the nomination!

02 November 2011

Keen teachers create Monassis templates

I have to admit that I was skeptical when Mark suggested that we let teachers try their hand at creating templated questions for Monassis. To give you an idea: this requires writing at least a Python file (for the question logic) and an XML file (for the question layout and scripting the user responses and feedback). There is also the option of writing some ASCIIsvg to generate graphs client-side, at run time, based on the values generated in the Python script. Oh, and I nearly forgot, you have to write LaTeX to get the mathematics right.
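To make the Python-plus-XML split concrete, here is a minimal sketch of the two halves of a template. The actual Monassis file formats aren't shown in this post, so the function names and XML elements below are invented for illustration only:

```python
import random

def generate_values(seed=None):
    """The Python half: generate the random values for one question instance."""
    rng = random.Random(seed)
    a = rng.randint(2, 9)
    b = rng.randint(2, 9)
    return {"a": a, "b": b, "answer": a * b}

# The XML half: layout with placeholders, plus LaTeX for the mathematics.
LAYOUT = """<question>
  <statement>Calculate \\({a} \\times {b}\\).</statement>
  <answer>{answer}</answer>
</question>"""

def render(seed=None):
    """Fill the layout's placeholders with one set of generated values."""
    return LAYOUT.format(**generate_values(seed))
```

A real template would also script the feedback for user responses; this only shows how the generated values flow from the Python logic into the XML layout.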

All in all, no easy task.

But, a few hours later, all of them had successfully created their own templated questions. With graphs and everything!

Looks like I'm going to have to rethink my stance on distributed volunteer creation of templated questions.

18 October 2011

Version 0.2

By and large, the past two weeks have been a success. Admittedly, I tested the boundaries of sanity along the way.

The highlights are that
  • XML templates with linked Python for logic and PNG or ASCIIsvg files for images are now stored in an SQL database and can be interpreted by Monassis;
  • templates have tags (based on the chapters and sections of the FHSST books) which can be used to filter the questions the user would like to practice;
  • student, teacher and author accounts exist, with students potentially belonging to class lists, each associated with a teacher.
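The storage half of the first bullet could look something like the sketch below, using an in-memory SQLite database. The actual Monassis schema isn't described in this post, so the table and column names here are guesses:

```python
import sqlite3

# Hypothetical two-table schema: templates plus a tag table for filtering.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE templates (id INTEGER PRIMARY KEY, name TEXT, xml TEXT);
    CREATE TABLE template_tags (template_id INTEGER, tag TEXT);
""")
db.execute("INSERT INTO templates VALUES (1, 'quadratic-roots', '<question/>')")
db.execute("INSERT INTO template_tags VALUES (1, 'Mathematics::Equations')")

def templates_for_tag(tag):
    """Filter templates by tag, as a user does when choosing what to practice."""
    rows = db.execute(
        "SELECT t.name FROM templates t "
        "JOIN template_tags g ON g.template_id = t.id WHERE g.tag = ?", (tag,))
    return [name for (name,) in rows]
```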

The only features that didn't make it in during the sprint are the reporting ones.

I'm about to show a demo to a group of teachers (in about 1 hour) and again some time tomorrow or Thursday to another group. Hoping to get some feedback from that.

04 October 2011

Sprint: Structure and User Management

The time has come for the next round of all-out coding to push the exam practice system to the next stage of usefulness. The goal is to make it possible for a small number of teachers to have their classes practice for the upcoming exams, and to monitor their students' progress. The target date for showing this v0.2 prototype to a group of teachers is 18 October — exactly 2 weeks from now.

List of features to implement:
  • Structured format for questions. The question templating is still too haphazard and not all question metadata is stored in a way that's easy to extract and process. The logic of questions will still be specified using Python. The question presentation and metadata will go into XML.
  • Hierarchical labels for questions, to make filtering manageable. The tags will be along the following dimensions: topic (e.g. Physics::Mechanics::2d motion::Projectile motion), concepts (e.g. interpreting linear graphs, quadratic equations, rearranging equations), grade level.
  • Many more questions. The structured format should make it easier for other people to help add templated questions. I should focus on whatever will be covered in the grade 10 final exams for mathematics and physics.
  • Teacher and student accounts. Each teacher can set up a class list of students.
  • Reporting to students: total problems completed, percentage answered correctly overall and filtered by topic, ranked list of topics that need practice.
  • Reporting to teachers: same as students, but at both individual student level and average class level.
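The hierarchical labels in the second bullet lend themselves to simple prefix matching: filtering on a topic should also match all of its subtopics. A small sketch, assuming `::`-separated tags as in the examples above (the question names are invented):

```python
def matches(tag, filter_prefix):
    """True if the tag sits at or below filter_prefix in the topic tree."""
    parts = tag.split("::")
    wanted = filter_prefix.split("::")
    return parts[:len(wanted)] == wanted

questions = {
    "q1": "Physics::Mechanics::2d motion::Projectile motion",
    "q2": "Physics::Mechanics::1d motion",
    "q3": "Mathematics::Algebra::Quadratic equations",
}

def filter_questions(prefix):
    """All questions whose topic tag falls under the given prefix."""
    return sorted(q for q, tag in questions.items() if matches(tag, prefix))
```

Filtering on "Physics::Mechanics" then returns both mechanics questions, while "Physics::Mechanics::2d motion" narrows it to the projectile one.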

P.S. The project now has a name: Monassis.

28 September 2011

System goes live!

Version 0.1 of the software is ready for testing by the pilot user group. This version implements:
  • A web UI running as a Python Pyramid application.
  • 6 templated questions that showcase the following functionality:
    • Random generation of values for the question.
    • Graphs and plots generated with those values.
    • Scaffolded questions, where users answer subquestions and get feedback in stages.
  • User tracking: the system records all responses that users gave to questions. No reporting is currently done, but this is on the to-do list.
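The user-tracking bullet amounts to an append-only log of responses that reporting can later be layered on. A minimal sketch of that shape (the class and field names are made up, not the actual Monassis internals):

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    user: str
    question: str
    answer: str
    correct: bool

@dataclass
class ResponseLog:
    responses: list = field(default_factory=list)

    def record(self, user, question, answer, correct):
        """Append one response; nothing is ever overwritten."""
        self.responses.append(Response(user, question, answer, correct))

    def for_user(self, user):
        """All responses by one user, in the order they were given."""
        return [r for r in self.responses if r.user == user]
```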

High on the to-do list are:
  • Adding more templated questions.
  • Categorizing questions based on a curriculum-aligned topic tree. This is critical for the future of the project, where analysis of users' responses to questions and how it relates to users' understanding of the concepts underlying said questions will play a leading role.
  • Providing some kind of reporting to teachers and students. It's not clear yet what this should look like, but a first draft will hopefully emerge from the pilot.
  • Switching to structured XML storage of templates. Templates are currently a bit ad hoc and with an eye on the future, it makes sense to move the presentation of questions into XML asap. The internal logic will stay in Python for now as I'm not yet convinced that MathML is capable of expressing it.
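Once the presentation lives in XML, reading a template back is a standard parsing job. A guess at what such a document might look like, parsed with the standard library — the element names are illustrative, since the post doesn't fix a schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical structured question template.
TEMPLATE_XML = """
<question grade="10">
  <statement>Solve for \\(x\\): \\(x^2 - 5x - 6 = 0\\)</statement>
  <answer>6,-1</answer>
</question>
"""

root = ET.fromstring(TEMPLATE_XML)
statement = root.findtext("statement")  # LaTeX stays inline in the text
answer = root.findtext("answer")
grade = root.get("grade")               # metadata as attributes
```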

19 September 2011

Project description

I'm developing software that will allow learners to practice exam questions through a browser or their mobile phone. This is only the outside layer of the project though. What it is really about is mapping learners' success at answering questions to their understanding of the concepts underlying those questions. By knowing with which concepts learners are struggling, the system can then do useful things like
  • providing more practice on the types of questions with which the learner is struggling;
  • recommending revision material to the learner from freely available educational resources (for example, see the Free High School Science Texts);
  • providing feedback/reports to learners (and possibly teachers and parents) about their progress and about the specific concepts to which they should pay more attention.
All of the above is done per individual learner. In this sense the system is delivering a customised practice and revision schedule for each learner.

How will it work?
Templated questions   Each test item (question) is described as a template. A template is a generalisation of a particular question, where the details of the question -- such as numbers and some of the structure -- can be different in each instance of the template. For example, a template for solving quadratic equations with integer roots might produce the following set of questions.
Solve for \(x\). Write the two solutions, separated by a comma. Example: 2,-3
  1. \(x^2 - 5x - 6 = 0\)
  2. \((1-x)(10-x) = -8\)
  3. \(x^2 - 21 = -4x\)
Note that the numbers and factorisation of each question are different, while the overall structure (quadratic with integer roots) is the same.
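The template above can be sketched in a few lines: pick two integer roots, then present the resulting quadratic in one of several structurally different surface forms. This is only an illustration of the idea, not the actual template code:

```python
import random

def quadratic_question(rng=random):
    """Generate one instance of the integer-roots quadratic template."""
    r1, r2 = rng.randint(-9, 9), rng.randint(-9, 9)
    b, c = -(r1 + r2), r1 * r2          # expanded form: x^2 + bx + c = 0
    form = rng.choice([
        f"x^2 + ({b})x + ({c}) = 0",    # expanded
        f"x^2 + ({b})x = {-c}",         # constant moved across
    ])
    return {"text": form, "roots": sorted({r1, r2}), "b": b, "c": c}
```

Whichever surface form is chosen, the returned roots always satisfy the underlying quadratic.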

Concept map   Test questions are designed to test subject-specific concepts. Furthermore, concepts typically require prior understanding of other concepts — covered earlier in the curriculum — for proper understanding. By incorporating the question-to-concept and concept-to-concept dependencies into the system, it can infer with which concepts learners are struggling based on their ability to answer test questions.
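The two mappings — question-to-concept and concept-to-concept — can be sketched as plain dictionaries, with a transitive closure giving every concept a learner needs first. The concept names below are invented for illustration:

```python
# Which concepts each question tests.
question_concepts = {
    "q1": {"quadratic equations"},
}

# Which concepts each concept depends on (covered earlier in the curriculum).
prerequisites = {
    "quadratic equations": {"factorisation", "linear equations"},
    "factorisation": {"arithmetic"},
    "linear equations": {"arithmetic"},
}

def all_prerequisites(concept):
    """Transitive closure: every concept required, directly or indirectly."""
    seen, stack = set(), [concept]
    while stack:
        for pre in prerequisites.get(stack.pop(), ()):
            if pre not in seen:
                seen.add(pre)
                stack.append(pre)
    return seen
```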

Inference   This is where all the clever stuff happens. Based on the mapping of questions to concepts, the data collected of learners' responses to questions, and some mathematical modelling, the system estimates each learner's proficiency at each of the concepts in the curriculum. This information can be reported or visualised to give learners and teachers an indication of mastery and problem areas. More importantly, knowledge of a learner's mastery of concepts allows the system to provide targeted review material or additional practice for his or her problem areas.
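As a deliberately simple stand-in for the modelling step, proficiency can be estimated as the fraction of correct answers per concept, using the question-to-concept mapping. The real system would use proper statistical modelling; this only shows the shape of the data flow:

```python
from collections import defaultdict

def proficiency(responses, question_concepts):
    """Naive per-concept proficiency estimate.

    responses: list of (question, correct) pairs.
    question_concepts: maps each question to the concepts it tests.
    """
    right = defaultdict(int)
    total = defaultdict(int)
    for question, correct in responses:
        for concept in question_concepts.get(question, ()):
            total[concept] += 1
            right[concept] += int(correct)
    return {c: right[c] / total[c] for c in total}
```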

16 September 2011

Update: Templates and web framework

Templating has turned out to take more time than I had expected. It is not just a matter of replacing one set of numbers in a question with another. For example,
\[2x^2 - 3x - 5 = 0\]
is mathematically equivalent to
\[(1 - 2x)(1 - x) = 6\]
but requires more understanding/work in terms of manipulating polynomial equations to solve. Perhaps I am trying to make them too general, but I think templates should be able to encode and generate these structurally different, but mathematically equivalent questions.
So far I have 3 templated questions, implemented as Python classes, each with a generate_question and a test_response method.
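Only the two method names above come from the current implementation; the internals below are a sketch of what such a class might look like for the quadratic example:

```python
import random

class QuadraticTemplate:
    """One templated question: a quadratic with one positive, one negative root."""

    def generate_question(self, rng=random):
        self.r1, self.r2 = rng.randint(1, 9), -rng.randint(1, 9)
        b, c = -(self.r1 + self.r2), self.r1 * self.r2
        return f"Solve for x: x^2 + ({b})x + ({c}) = 0"

    def test_response(self, response):
        """Accept the two roots in either order, e.g. '2,-3'."""
        try:
            given = sorted(int(part) for part in response.split(","))
        except ValueError:
            return False
        return given == sorted([self.r1, self.r2])
```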

I've also been learning the Pyramid web framework for Python. I'll use this to build the front-end for the online assessment pilot. The project is very well documented. I wouldn't recommend it to someone with no web app development experience, but the barrier to entry is pretty low.

14 September 2011

Frameworks vs libraries

Wow, this is the most friendly and concise description that I've ever read. In less than a minute it cleared up almost every point of confusion I had about web frameworks. Sourced from the Pyramid documentation:
A framework differs from a library in one very important way: library code is always called by code that you write, while a framework always calls code that you write. Using a set of libraries to create an application is usually easier than using a framework initially, because you can choose to cede control to library code you have not authored very selectively. But when you use a framework, you are required to cede a greater portion of control to code you have not authored: code that resides in the framework itself. You needn’t use a framework at all to create a web application using Python. A rich set of libraries already exists for the platform. In practice, however, using a framework to create an application is often more practical than rolling your own via a set of libraries if the framework provides a set of facilities that fits your application requirements.

13 September 2011

Sprint: Creating an exam drill system in 2 weeks

In 45 days the first paper for the national senior certificate exam will be written by grade 12 learners across South Africa. That gives me 2 weeks to create a web-based system that will allow learners to practice their exam skills so that they can actually use the system for the last month.

There are, of course, larger goals and I'll write more about these later. In short, this is a really good opportunity to get a bunch of grade 12 learners to provide feedback on an automated assessment and progress tracking system that we can roll out on a larger scale in future.

  • Templated questions
    • process past papers
    • develop templating language (Python + LaTeX/AsciiMath/MathML + AsciiSvg)
    • build concept map from FHSST TOC and link questions to concepts
  • UI: web-based, using Python + Pyramid + MathJax + AsciiSvg
  • Back-end:
    • Templated question database + API, eventually morphing into the FullMarks API
    • User account management, user history tracking. The user history will be vital in future, for finding correlations between user performance on various concepts and for suggesting revision material based on a user's performance.
  • Community feedback
    • The purpose of the pilot is to build a better system for the following school year.
    • Q&A forum where I can address their issues, and they can help one another.
    • Notify users of new features or questions on the system.
    • Explicitly solicit post-exam user feedback on how to improve the system.
  • Hosting (Siyavula, Rackspace?), branding, domain. Maybe the last two are not so important for now.