In picking a test procedure, I got lucky. Really lucky.
It was the kind of luck that comes around every fifty years or so. I stumbled into a research experiment that allowed for rapid hypothesis changes, immediate feedback, and quick turnaround on results.
Don't yawn. To a researcher this is the mother lode.
If normal testing progress moved at the speed of a covered wagon, I found myself in a Lear Jet. With that speed and flexibility, it still took over a year to untangle the learning issues. But the results, they were spectacular.
When someone says they "got lucky," or uses the word "spectacular," you know you're in for a story. It's unavoidable.
In this case, the story began in a pizza parlor.
When the video games first came out, the local pizza parlor brought in three consoles. The games were crude, but the kids lined up with a steely glint in their eyes and a tight grip on their quarters. They were serious.
I thought the fad would pass, but the games got more sophisticated and the owner added machines.
Every lunchtime, lines formed at the machines. The kids studied each other's moves and started scoring at the expert level. Stories of top scores and free games filled the room. For months. They became superstars of thumb twitching and monster dodging.
If those kids brought that energy to the classroom, we'd have a cure for cancer.
Twenty years later, I believed that computer games had the potential to fill classrooms with the energy I'd seen at the video machines. By building knowledge into the games, the same kids who squirmed during a math lesson could be transformed into serious students.
If I could capture this energy, dropout rates could disappear, and true education could move into every home. It was quite the vision.
The idea wasn't that farfetched. Game-based training had been around for some time and was proving itself valuable in decision-heavy simulations. The Army loved the combat simulators, and businesses were automating in-house training.
Computer games seemed like a perfect fit for teaching. They were patient, provided instant feedback, and offered repetition that no teacher could supply.
Most computer-based training, however, was boring. Point-and-click and point-and-shoot put me to sleep. The current crop of educational games provided feedback and allowed for re-tries, but it wasn't enough.
I wasn't blaming the technology: computers had sound, color, images, and enormous flexibility in the software. The building blocks were available.
In the end, I blamed the teaching methodology for putting students to sleep. For goodness' sake, these kids could shoot the same monster for hours, yet we couldn't keep them awake with real science. That's bad.
I'd fix the educational computer games. That was my new mission. How hard could it be to add a few learning principles, and wham, the games would be interesting and educational?
I charged ahead. Which learning principles would make the difference?
As I didn't have much else to go on, I figured the academic literature was a good starting point. Unfortunately, reading through the Educational Psychology journals did not go well. The writing confused me. Many experiments seemed to miss the point; others didn't make sense.
A few techniques, however, were well accepted. The experts all recommended repetition and review. Visual presentations, although not completely tested, were another common suggestion.
My QA Roots
True to my training in the software test lab, my first inclination was to validate these theories before weaving them into the games. If they worked, great. If they didn't, well...I expected them to be okay.
What would be a good test? I needed someone willing to learn material using repetition and review. They'd then be tested for recall and performance.
I volunteered. After all, the test was mostly a formality, and I'd learn something new.
For the experiment to work, the subject had to be something difficult, a topic I knew little about.
Chinese! That was it. I'd learn to speak Chinese.
I bought the latest books and the best tapes. The three-thousand-character writing system? No problem. It would be a great test.
Two weeks into the study program, I admitted failure. My tongue wasn't handling the new sounds, and my lack of a Chinese-speaking teacher was proving to be a serious obstacle. Without a mechanism for immediate feedback, I couldn't tell right from wrong.
I began the search for a new learning endeavor. This time, a mechanism for immediate feedback would be required.
The following week I was flipping through the cable channels and noticed the Poker Tour games. The prize money certainly was impressive.
Poker! That was it. I'd learn to play Texas Hold'em Poker.
Poker had all the key elements. The game was complex; how-to books were available; and winning offered immediate feedback.
Granted, poker seemed like too much fun to anchor a serious experiment in educational methodology. Maybe the shadow of failure from the Chinese effort pushed me toward poker; maybe it was the hint of mischief and the thrill of a treasure hunt.
It was decided. Learning poker was the test case.
I'm not going to make excuses. In fact, poker turned out to be a much better test environment than I expected. Computer-simulated poker games allowed me to test new techniques against hundreds of hands. The mix of statistics, decision factors, and rapid play created a stressful environment. Feedback was close to immediate.
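To give a feel for that rapid-feedback loop, here is a minimal sketch (mine, not from the original study) of how a computer can deal hundreds of hands in moments and check a statistic against the known odds. The function names and the pocket-pair example are illustrative assumptions, not part of the original experiment.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]  # the 52-card deck

def deal_hole_cards(rng):
    """Deal two hole cards, as at the start of a Texas Hold'em hand."""
    return rng.sample(DECK, 2)

def estimate_pocket_pair_rate(trials=10_000, seed=42):
    """Monte Carlo estimate of the chance both hole cards share a rank."""
    rng = random.Random(seed)
    pairs = 0
    for _ in range(trials):
        a, b = deal_hole_cards(rng)
        if a[0] == b[0]:  # same rank character means a pocket pair
            pairs += 1
    return pairs / trials

if __name__ == "__main__":
    # The exact probability is 3/51, about 0.0588; thousands of
    # simulated deals converge on it in a fraction of a second.
    print(f"estimated pocket-pair rate: {estimate_pocket_pair_rate():.4f}")
```

The point isn't the arithmetic; it's that a question which would take a human dealer an afternoon to answer by hand gets answered, and re-answered under new assumptions, in seconds.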
A good many people have snickered and rolled their eyes at the mention of poker as a test vehicle. Snicker if you must, but read through the results.
Texas Hold'em turned out to be complex, loaded with misdirection, and I was stumped much of the time. This very confusion, combined with the ability to adjust and retry approaches, led to the findings that were so unexpected.
No, I haven't made a million yet. Let's look at the test results.
| AWSS Home | Previous Section | Next Section | Feedback: James Davis