EduTech from a Teacher-Coder: Engagement Without the Game

How to create meaningful, real-time engagement with a workflow that’s simple and actually usable in class

Whether you teach remotely, as I do, or work in person, you know that student engagement is a paramount concern. Students should not be passive recipients very often or for long stretches of a lesson.

Gamification enthusiasts, and coders who have not taught middle school, often seem to believe that the answer is to make studying more like an Xbox adventure. Add music, competition, points, and tokens, and students will learn without even knowing it!

But I want my students’ cognitive load carrying the lesson, not the rules of the game, the points they earned, or the banter with the other team. To this end, I developed “live session” interactive versions of many of the Innovation apps.

The workflow goes like this: the instructor starts a host instance of the activity, copies a special participation link, and sends it to students, who then get an interactive screen. Live sessions turn an activity into a shared experience that fosters engagement through inquiry, curiosity, discussion, debate, and reinforcement.

I use the TestApp and Étude live sessions to debrief after a test or to review for one. The teacher screen displays the questions one at a time; the host opens the session for responses and closes it when time is up. Student responses are displayed anonymously for debriefing.

“Engagement isn’t just activity—it’s thinking.” 

I use the Grammar app live session in my French classes. I display the prompt on screen and open the session for responses; students submit their work, and I can display it anonymously for debriefing. This is exactly the same as the assignment, just presented in an interactive form.

I use the Media presentation application most often for teaching social studies and for my advanced French courses where I am delivering content. It is a very powerful and flexible application that will be discussed in detail in a later post. Suffice it to say for now that Media live sessions have all the tools we need to gather brief and extended student replies and reactions, from short answer to multiple choice and even a selection of emojis!

One of my students remarked that the live sessions were kind of a boring Kahoot! I laughed and replied that was the intention! No points, music, sound effects, rankings, scores, goofy animations. The focus is on the lesson. If anything is to be entertaining, it’s going to be me!

EduTech from a Teacher-Coder: Restoring the Teacher’s Line of Sight

For about a decade, classroom technology quietly broke something important.

Teachers lost their line of sight into student work.

I don’t mean theory—I mean the simple ability to know what students are actually doing.

Some call this “command and control,” but that misses the point.

What teachers actually need is simple: the ability to know what students are doing, in real time, so they can guide and support them.

We need the old-fashioned line-of-sight supervision and guidance that instructors maintained in effective classrooms before every student got a Chromebook with a pile of office productivity software. I knew exactly what my students were doing as I circulated the classroom. I could look over a shoulder and offer advice on a forming essay. I could redirect students who had found something off-task more interesting. I could ensure with some reliability that no cheat sheets were being used on tests and that students were doing their own work. And I was able to keep the class workflow moving so we didn’t fall behind through delays and procrastination.

Then came Chromebooks, whose screens we could not see or that were easily hidden. With them came office productivity tools designed for mature adults in paying jobs, people motivated, for the most part, to get their work done. Ironically, tools designed for productive adults often made classrooms less productive.

There are a number of expensive software products on the market now for monitoring student screens. At my last district, we had a product that let us monitor everyone’s tabs. But I really don’t think my own workflow is much improved by surveilling a dozen tiny screencasts.

I’m retired now and teach remotely a few hours a day. I need more than ever to know exactly what my students are doing, both to maintain the pace of the lesson and to ensure assessment integrity. This post’s “EduTech from a Teacher-Coder” solution is the monitor and proctor features built into all of Innovation’s apps.

Monitor

Every application at Innovation comes with a monitor that displays in real time how students are progressing on their task. The test monitor shows which question each student is on and even has a messaging feature so I can quietly post notifications to students inside their test. The writing app monitor displays each student’s current essay, their word count, and their use of any AI licenses. The vocabulary quiz, sorting app, the “KnowWhere” map study, cause-and-effect study, reading comprehension, cloze app, ordered list, forum, and even the AI chat application can all display student progress and often their work product. Every monitor can optionally hide student names, so teachers can display it on a shared screen at the front of the classroom as a reminder to keep everyone on pace.

Monitors let teachers see the correct responses for many activities. The monitor restores an important element of classroom command and control: knowing exactly what students are doing.

Proctor

The proctoring feature is extensive throughout all of the activities. A proctor is an after-the-fact analysis, and proctors come with AI interpretation and summary features. When did students start the app? How long did they spend on each question? Did they leave the screen? Paste in any text? Try to right-click and use a spellchecker or an AI assistant they weren’t licensed for?

A common view of assessment in remote teaching is that it is not reliable. But if a strong AI-assisted proctor runs during the assessment and an adult supervises in the room, we can be assured of a result as reliable as in an old-fashioned in-person class.

Teacher Command and Control Supports Successful Student Outcomes

When the proper guardrails are in place, guardrails we have always had in teaching, we can be more assured of delivering the kind of high-quality, effective instruction that leads to student success. A dozen applications at Innovation include monitoring, and all of them include proctoring.

For years, we handed students powerful tools…
and took away the teacher’s ability to see how they were being used.

That was the mistake and now we’re correcting it.

A Better Way to Assign Short Student Presentations Across the Curriculum

When I started teaching in 1991, the highest level of technology in my class was my pocket calculator. Supervision was a matter of circulating the room to make sure students were engaged.

When technology became part of our schoolrooms, we had to surrender much of the supervision we used to have. Students could now hide behind Chromebooks, or click away quickly when we walked by, and easily become off-task and disengaged. The main reason was that the first technology solutions were designed for offices, not for classrooms. We thought this was a great idea, since many students would one day be using such applications in the workforce.

We were wrong about that.

Software designed for adults, for office workers and designers, is not appropriate for most classroom settings simply because it does not have the guardrails and monitoring that we used to have in pre-EdTech days.

Yes, we worked around it. We added internet filters, screen monitoring software, and the like. But that is not the same as having direct observation of our students and control over their workflows.

Many efforts to create truly classroom-friendly EdTech have focused on “gamifying” learning. Developers believed the old trope that you could trick students into learning if they were having fun. Don’t get me started on that…

The problem I wanted to address in this post occurred in a remote AP French class I was teaching. The remote platform was Canvas. The assignment was to produce a two-minute video presentation in French, mostly improvised, to model how the task is set up on the AP exam. The students dutifully uploaded their little videos to Canvas, and it was obvious that they were reading prepared scripts and had an AI either do the work or correct it. I knew from class sessions that they were not capable of that level of language proficiency, and anyone watching could see they were reading.

How does one rationalize giving a high stakes grade for that?

EduTech Solution from a Teacher-Coder

Presto is an application on the Innovation platform that resolves the issue of students submitting AI-generated presentations and scripts without real learning or synthesis. While originally devised as an evaluation tool for world-language learners, it is extremely effective in content-area classes like social studies.

Students log in and are redirected to the assessment. Once they have set up their camera and mic and started the camera, the task begins; only then can they see the prompts. A strict timer runs, and an AI-enhanced proctor records their engagement and activity on the page. Once started, students must finish, or they must be readmitted by the teacher. This prevents them from viewing the prompts, doing research, and then starting again.

The proctor provides the supervision we often lack in modern education software. The time limit and the coordination of camera activation with prompt visibility prevent cheating very effectively.

“AI has made scripted assignments meaningless. Presto measures thinking instead.”

More importantly, the structure encourages authentic thinking. Students must interpret the prompts and organize their ideas in real time rather than relying on pre-written scripts. Instead of reading polished AI-generated text, they must explain ideas in their own words within a clear time limit.

For teachers, this makes evaluation more meaningful: the focus shifts from detecting AI assistance to assessing a student’s ability to communicate understanding.

And that was the goal all along.

The Growth Bonus: Rewarding Improvement While Maintaining Academic Standards

Two students submit essays that both receive a score of 75.

At first glance, their performance appears identical. But the stories behind those two scores may be very different. One student might have scored a 74 on the previous assignment—essentially maintaining the same level of work. Another might have improved dramatically from a 60.

In both cases the essays themselves may be similar in quality. Yet one student clearly demonstrated substantial learning along the way.

This raises an interesting question for teachers: should grades reflect only the current piece of work, or should they also recognize improvement over time?

In many courses, particularly those that emphasize writing and analytical thinking, improvement is an important part of the learning process. Students revise strategies, incorporate feedback, and gradually strengthen their arguments and use of evidence.

To recognize that progress without distorting the meaning of grades, some assignments may include what we call a growth bonus.

The idea is simple: meaningful improvement deserves recognition—but the quality of the current work must still matter most.


How the Growth Bonus Works

The growth bonus uses a mathematical rule that compares the current score with a previous comparable assignment.

Three values are involved:

R – the raw score on the current assignment
B – the score from a previous assignment
T – a readiness target representing strong course-level work (often around 82)

The adjusted score is calculated as:

Adjusted = max(R, R + 0.8 × max(0, R − B) − 0.2 × max(0, T − R))

In plain language, the formula does three things at the same time.

First, it rewards improvement from the previous assignment. If a student improves by ten points, most of that improvement is reflected in the adjustment.

Second, it moderates extremely large score jumps when the current essay is still below the level expected for the course. This keeps the adjustment from turning a developing essay into a top-tier score.

Finally—and importantly—the formula guarantees that the adjusted score can never be lower than the original score.

The growth bonus can help a score. It cannot hurt it.


A Quick Example

Suppose a student scored 61 on a previous essay and 72 on the current one.

The improvement is:

72 − 61 = 11

Most of that improvement is rewarded:

0.8 × 11 = 8.8

Because the essay is still somewhat below the readiness target of 82, a small moderating adjustment is applied:

0.2 × (82 − 72) = 2

The adjusted score becomes:

72 + 8.8 − 2 = 78.8

The student’s improvement is recognized, but the final score still reflects the level of the current work.


What Happens If the Score Declines?

If the new score is lower than the previous one, the improvement term becomes zero. In theory the formula could produce a slightly lower number—but the rule

max(R, …)

ensures that the final score never drops below the original score.

In practice, this simply means the raw score stands as it is.
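The rule is compact enough to sketch in a few lines of Python. The function name is my own, but the formula, the constants (0.8 and 0.2), and the readiness target of 82 come straight from the description above:

```python
def growth_adjusted(raw, baseline, target=82.0):
    """Apply the growth bonus: reward improvement over a previous
    comparable assignment, moderate the jump when the current score
    is still below the readiness target, and never go below raw."""
    improvement = max(0.0, raw - baseline)   # only gains count
    shortfall = max(0.0, target - raw)       # distance below the target
    candidate = raw + 0.8 * improvement - 0.2 * shortfall
    return max(raw, candidate)               # the bonus can only help

# The worked example above: a 61 followed by a 72
print(round(growth_adjusted(72, 61), 1))   # 78.8

# A declining score: the improvement term is zero, so the raw score stands
print(growth_adjusted(65, 70))             # 65
```

Note that the `max(raw, candidate)` on the last line of the function is the guarantee discussed above: the adjustment can raise a score but never lower it.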


Why Not Just Use Standardization?

Standardization adjusts scores based on the statistical distribution of scores in the class.

A simplified version of the formula looks like this:

Standardized score = ((R − μ) / σ) × s + m

Here:

R is the raw score,
μ is the class average,
σ is the standard deviation,
and the constants s and m determine the new spread and average of the scores.

Standardization can be useful when a test turns out to be unusually difficult or unusually easy. However, it measures performance relative to the class rather than improvement over time.

In some cases it can also produce surprisingly large adjustments. A raw score in the low seventies might become a ninety simply because the class average was low.
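To see how standardization can overshoot, here is a minimal sketch of the formula above. The class statistics (mean 64, deviation 8) and the rescaling constants (spread 10, new mean 80) are invented purely for illustration:

```python
def standardize(raw, mu, sigma, s, m):
    """Rescale a raw score: compute its z-score against the class,
    then map it onto a distribution with spread s and mean m."""
    return ((raw - mu) / sigma) * s + m

# Illustrative low-average class: a raw 72 becomes a 90
print(standardize(72, mu=64, sigma=8, s=10, m=80))   # 90.0
```

The adjustment here depends entirely on where the class average sits, not on whether this student improved, which is exactly the limitation the growth bonus avoids.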

The growth bonus approach focuses instead on learning progress—recognizing students who improve while still keeping grades tied closely to the quality of the work itself.


Why the Readiness Target Matters

The readiness target used in the formula—often around 82—represents the level of performance typically associated with strong work on AP-style writing rubrics.

It is not a passing threshold or a minimum expectation. Instead, it serves as a reference point that helps keep score adjustments realistic.

Students who are already writing at a strong level will see modest adjustments. Students who are improving rapidly will see more noticeable ones.


The Larger Goal

Ultimately, the purpose of the growth bonus is not to inflate grades. It is to encourage the kinds of behaviors that lead to real academic progress: revising writing strategies, strengthening arguments, integrating evidence more effectively, and improving clarity and precision of language.

Grades should communicate meaningful information about learning. They should reflect both where a student stands today and how far that student has come.

The growth bonus is one way of recognizing both.