the Boston area developed it. At the
time, I was Professor of Business Data Processing at Miami-Dade Community
College. While running the 1st Class Fusion demo programs I realized
that the expert system shell was similar to a computerized multiple-choice
test. I quickly acquired a Fusion developer's license, and started
computerizing the multiple-choice tests I had developed over the years.
I was the first professor, in a very large five-campus college, to
computerize all of my exams. For this achievement, I won a League
of Innovation award. Frankly, this was something of a cheap trick: I had
simply used expert system shell technology to computerize my multiple-choice
tests quickly, not exploited it for its technical excellence.
Illustration of Simple Multiple Choice Exam
For the purpose of illustration, here is a simple three-
question test expressed in a modern knowledge-based system: Knowledge
Builder from Attar Software. It expresses the logic for the test's
execution flow as a graphical decision tree. One creates tests by
drawing decision trees and maintains them by editing the
trees. Today, anyone with basic computer literacy can draw
and edit these decision trees. An "inference engine", the
run-time component of the knowledge-based system, traverses the
trees, displaying dialogs, performing calculations, and keeping records
in the background, as well as generating reports.
This system starts with a Login procedure, which
initializes the test and sets up record keeping. The system then displays
a dialog for question 1 (Q1 in Figure 1), a multiple-choice question,
which may be in any form: a Windows dialog,
an HTML page, etc. After collecting the
input (a, b, c or d) from this dialog, the test system moves forward
through the tree. If 'a' was the answer to Q1, then the system runs
the Score procedure, moves through Label_1, and goes on to Q2. If the student
answers with 'b', 'c', or 'd', the system moves through Label_1 on
to Q2, skipping the Score procedure. The system progresses through
the tree arriving at the Report node, where the system completes additional
record keeping and reporting.
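The traversal described above can be sketched in a few lines of Python. This is only an illustrative sketch, not Knowledge Builder's actual engine: the node names (Login, Q1-Q3, Score, Report) follow the article, but the answer key and the one-point scoring rule are assumptions.

```python
# Assumed answer key for the three questions; 'a' on Q1 matches the
# branch in Figure 1 that triggers the Score procedure.
CORRECT = {"Q1": "a", "Q2": "c", "Q3": "b"}

def run_test(responses):
    """Walk the linear tree: Login -> Q1 -> Q2 -> Q3 -> Report.

    `responses` maps each question id to the chosen option 'a'..'d'.
    """
    record = {"answers": {}, "score": 0}   # Login: initialize record keeping
    for q in ("Q1", "Q2", "Q3"):           # each question's dialog in turn
        choice = responses[q]              # input collected from the dialog
        record["answers"][q] = choice
        if choice == CORRECT[q]:           # the Score procedure runs only on
            record["score"] += 1           # the correct-answer branch
    return record                          # Report: final record keeping

# Example run: correct on Q1 and Q3, wrong on Q2 -> score of 2
report = run_test({"Q1": "a", "Q2": "d", "Q3": "b"})
```

A real shell would render each question as a dialog or HTML page rather than read from a dictionary, but the control flow, one node after another with a scoring side branch, is the same.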
Discovering the Assessment-Driven Adaptive Tutorial
I soon realized that creating multiple-choice tests with
an expert system shell ignored most of the shell's important features
and advantages. For example, in a simple, linear, computerized multiple-choice
test, you move from question 1 to question 2, etc., until you reach
the last question. However, by using an expert system shell, the answer
to the first question can determine the selection of the second, and
how you answer the second question can determine the selection of
the third and so on. Finally, you reach one of many conclusions -
a result of how you answered the questions presented to you. In most
cases, you reach a conclusion without seeing all of the questions.
This is very efficient, but how does it improve testing?
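The adaptive idea above, where each answer selects the next question, amounts to replacing the linear sequence with a branching table. The following sketch uses a hypothetical two-question bank and conclusion names (PASS, REVIEW, FAIL) invented for illustration; they do not come from the article.

```python
# Hypothetical branching table: for each question node, the chosen
# answer names the next node, which may be a question or a conclusion.
TREE = {
    "Q1":      {"a": "Q2_easy", "b": "Q2_hard", "c": "Q2_hard", "d": "Q2_easy"},
    "Q2_easy": {"a": "PASS",    "b": "REVIEW",  "c": "REVIEW",  "d": "REVIEW"},
    "Q2_hard": {"a": "PASS",    "b": "PASS",    "c": "REVIEW",  "d": "FAIL"},
}
CONCLUSIONS = {"PASS", "REVIEW", "FAIL"}

def run_adaptive(responses):
    """Follow the tree from Q1 until a conclusion node is reached."""
    node, asked = "Q1", []
    while node not in CONCLUSIONS:
        asked.append(node)
        node = TREE[node][responses[node]]  # the answer picks the next node
    return node, asked

# Answering 'b' on Q1 routes to Q2_hard; 'a' there concludes PASS.
conclusion, path = run_adaptive({"Q1": "b", "Q2_hard": "a"})
```

Note that each path visits only some of the questions, which is exactly the efficiency the text describes: the student reaches a conclusion without seeing the whole bank.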
This suggests that you must design multiple-choice
questions differently. Previously, you had a question stem, a correct
answer and a set of wrong answers (distracters). You were only interested
in whether the student answered the question correctly or incorrectly.
This produced a score, which when combined with scores from other