The Growth Bonus: Rewarding Improvement While Maintaining Academic Standards

Two students submit essays that both receive a score of 75.

At first glance, their performance appears identical. But the stories behind those two scores may be very different. One student might have scored a 74 on the previous assignment—essentially maintaining the same level of work. Another might have improved dramatically from a 60.

In both cases the essays themselves may be similar in quality. Yet one student clearly demonstrated substantial learning along the way.

This raises an interesting question for teachers: should grades reflect only the current piece of work, or should they also recognize improvement over time?

In many courses, particularly those that emphasize writing and analytical thinking, improvement is an important part of the learning process. Students revise strategies, incorporate feedback, and gradually strengthen their arguments and use of evidence.

To recognize that progress without distorting the meaning of grades, some assignments may include what we call a growth bonus.

The idea is simple: meaningful improvement deserves recognition—but the quality of the current work must still matter most.


How the Growth Bonus Works

The growth bonus uses a mathematical rule that compares the current score with a previous comparable assignment.

Three values are involved:

R – the raw score on the current assignment
B – the score from a previous assignment
T – a readiness target representing strong course-level work (often around 82)

The adjusted score is calculated as:

Adjusted = max(R, R + 0.8 × max(0, R − B) − 0.2 × max(0, T − R))

In plain language, the formula does three things at the same time.

First, it rewards improvement from the previous assignment. If a student improves by ten points, most of that improvement is reflected in the adjustment.

Second, it moderates extremely large score jumps when the current essay is still below the level expected for the course. This keeps the adjustment from turning a developing essay into a top-tier score.

Finally—and importantly—the formula guarantees that the adjusted score can never be lower than the original score.

The growth bonus can help a score. It cannot hurt it.
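The rule above can be sketched as a short Python function. The 0.8 and 0.2 weights and the default target of 82 follow the formula in the text; the function name is my own:

```python
def growth_adjusted(raw, prev, target=82.0, reward=0.8, moderation=0.2):
    """Apply the growth bonus: reward improvement over a previous score,
    moderate the bonus when the work is still below the readiness target,
    and never return less than the raw score."""
    improvement = max(0.0, raw - prev)   # gain since the previous assignment
    gap = max(0.0, target - raw)         # distance below the readiness target
    adjusted = raw + reward * improvement - moderation * gap
    return max(raw, adjusted)            # the bonus can help, never hurt

print(growth_adjusted(72, 61))  # previous 61, current 72 -> 78.8
print(growth_adjusted(70, 80))  # score declined -> raw score stands: 70
```

With a previous score of 61 and a current score of 72, this returns 78.8; when the score declines, the outer `max` simply lets the raw score stand.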


A Quick Example

Suppose a student scored 61 on a previous essay and 72 on the current one.

The improvement is:

72 − 61 = 11

Most of that improvement is rewarded:

0.8 × 11 = 8.8

Because the essay is still somewhat below the readiness target of 82, a small moderating adjustment is applied:

0.2 × (82 − 72) = 2

The adjusted score becomes:

72 + 8.8 − 2 = 78.8

The student’s improvement is recognized, but the final score still reflects the level of the current work.


What Happens If the Score Declines?

If the new score is lower than the previous one, the improvement term becomes zero. In theory the formula could produce a slightly lower number—but the rule

max(R, …)

ensures that the final score never drops below the original score.

In practice, this simply means the raw score stands as it is.


Why Not Just Use Standardization?

Standardization, by contrast, adjusts scores based on the statistical distribution of scores in the class.

A simplified version of the formula looks like this:

Standardized score = ((R − μ) / σ) × s + m

Here:

R is the raw score,
μ is the class average,
σ is the standard deviation,
and the constants s and m determine the new spread and average of the scores.

Standardization can be useful when a test turns out to be unusually difficult or unusually easy. However, it measures performance relative to the class rather than improvement over time.

In some cases it can also produce surprisingly large adjustments. A raw score in the low seventies might become a ninety simply because the class average was low.

The growth bonus approach focuses instead on learning progress—recognizing students who improve while still keeping grades tied closely to the quality of the work itself.
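For comparison, here is a minimal sketch of the standardized-score formula, with made-up class numbers chosen to illustrate the point above: when the class average is low, a raw score of 72 can come out as a 90.

```python
def standardize(raw, class_mean, class_sd, target_mean, target_sd):
    """Map a raw score onto a target distribution via its z-score."""
    z = (raw - class_mean) / class_sd    # standard deviations from the class mean
    return z * target_sd + target_mean   # re-mapped onto the target scale

# Hypothetical class with mean 62 and SD 10, re-mapped to mean 80, SD 10:
print(standardize(72, 62, 10, 80, 10))  # 90.0
```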


Why the Readiness Target Matters

The readiness target used in the formula—often around 82—represents the level of performance typically associated with strong work on AP-style writing rubrics.

It is not a passing threshold or a minimum expectation. Instead, it serves as a reference point that helps keep score adjustments realistic.

Students who are already writing at a strong level will see modest adjustments. Students who are improving rapidly will see more noticeable ones.


The Larger Goal

Ultimately, the purpose of the growth bonus is not to inflate grades. It is to encourage the kinds of behaviors that lead to real academic progress: revising writing strategies, strengthening arguments, integrating evidence more effectively, and improving clarity and precision of language.

Grades should communicate meaningful information about learning. They should reflect both where a student stands today and how far that student has come.

The growth bonus is one way of recognizing both.

Precision in Assessment: Why Standardization Outperforms the Traditional “Curve”

In secondary and post-secondary education, teachers often face a “measurement gap.” This occurs when a highly rigorous assessment—such as a mock professional exam or a complex technical project—yields raw scores that accurately reflect performance benchmarks but fail to align with the broader institutional grading scale.

To bridge this gap, many educators rely on a “curve.” However, traditional curving often lacks statistical validity. Standardization, specifically through the use of Z-scores, offers a more mathematically sound and equitable alternative.

The Limitations of Common “Curves”

The term “curve” is frequently applied to two common but flawed methods:

  1. The Flat-Point Addition: Adding a set number of percentage points to every student. While “fair” in its uniformity, it does nothing to address the variance or “spread” of the scores.
  2. The Ceiling Curve: Adjusting the highest score to 100% and shifting others accordingly. This makes the entire class’s grades dependent on a single outlier, which can lead to volatile and inconsistent results.

These methods are essentially “band-aids” that fail to account for the relative performance of the cohort.

The Logic of Standardization (Z-Scores)

Standardization treats a set of scores as a distribution. By converting raw scores into Z-scores, we determine exactly how many standard deviations a student’s performance sits above or below the group mean.

The formula for calculating a Z-score is: z = (x – μ) / σ (Where x is the raw score, μ is the mean, and σ is the standard deviation.)

Once we have the Z-score, we can “re-map” it onto a target distribution (such as a school’s historical GPA mean). This ensures that a student who performs at the 90th percentile on a difficult assessment is rewarded with a grade that reflects that 90th-percentile standing in the gradebook.
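A small sketch of this re-mapping, using invented scores and an arbitrary target mean and SD, shows that each student's relative standing survives the transformation:

```python
import statistics

def remap(scores, target_mean, target_sd):
    """Convert raw scores to z-scores, then map them onto a target distribution."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)    # population SD of the cohort
    return [((x - mu) / sigma) * target_sd + target_mean for x in scores]

raw = [55, 62, 70, 78, 85]
mapped = remap(raw, target_mean=83, target_sd=6)
# Rank order is preserved: the top raw score is still the top mapped score,
# and the mapped scores have exactly the target mean and spread.
```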

Why Standardization is the Professional Choice

  • Maintains Rubric Integrity: Educators can grade with extreme rigor against high-level standards without fear of destroying a student’s GPA. The raw feedback remains honest, while the gradebook remains fair.
  • Corrects for Assessment Difficulty: Not every test is of equal difficulty. Standardization automatically adjusts for a test that was “too hard” or “too easy” by focusing on the student’s relative mastery within the cohort.
  • Statistical Defensibility: If a grade is challenged, the educator can point to a transparent, mathematical process based on the class distribution rather than an arbitrary “bump” in points.

By adopting standardization, we move away from “adjusting numbers” and toward “aligning distributions.” This practice respects the data produced by the assessment while ensuring that the final grade accurately reflects a student’s standing within the academic environment.

✨ Let the AI Teaching Assistant Help You Generate Questions to Embed in Videos and PDFs

Need comprehension or analysis questions for a PDF or a video? That can be incredibly time-consuming — but Innovation’s Teaching Assistant is here to help!

Just open the Étude app. Upload your PDF or paste your video embed code, then add it to your AI request configuration. In seconds, you’ll have high-quality questions based on your stimulus, crafted in the language and level of sophistication you choose.

Check it out and see how much time you’ll save!

✨ How To Score Writing Tasks Using AI

The AI Grading Assistant integrated into Innovation is a powerful tool designed to streamline the assessment of student writing tasks.

With just a click, you can apply one of the pre-installed rubrics or upload and use your own custom rubric. After a brief processing time, you’ll receive a detailed second opinion to help you balance and validate your own evaluation of the student’s work. The AI’s assessment is based both on the selected rubric criteria and on the advanced capabilities of a large generative AI model. As of this writing, Innovation uses GPT-4o for essay scoring, ensuring fast, consistent, and thoughtfully reasoned feedback.

✨ Make a Jeopardy Game with Innovation’s AI Integration!

My students have always loved playing Jeopardy! Oh, sorry, trademark issue… I mean “Jeopardy-like trivia games in class”. 😏

Our game is called “Ventura”.

Innovation has had a fantastic app for generating such games for years now. As part of our integration of AI into our whole system, teachers can now employ our AI teaching assistant in generating Jeopardy games!

Just like for creating test questions, teachers configure the request to OpenAI in the Ventura game.

Use the teaching assistant to generate questions. Use them as-is or edit them. Add images or audio clips!

Holy cow, I remember the old days back in the 1990s when I would use PowerPoint to make a Jeopardy game for review day. It took a really long time to enter all the questions and answers even when I had a template game prepared!

Now I can make a game in 2-3 minutes! The test generator uses one of OpenAI’s current models, so you can rely on the question quality.

Enjoy!

Student Random Call-on App

In my current situation, teaching part-time remotely as a retiree, I find it useful to call on students in remote classes. Keeping students engaged in a virtual lesson is a high priority for my attention, perhaps even more so than in an in-person setting. I think this is due to the nature of digital devices, with their many distractions, and to the limits these tiny windows place on human interaction!

When I am teaching new vocabulary to my French students, I like to use Innovation’s flashcard app. I use it all the time, especially in my beginner-level French classes. The app lets me carry out a number of instructional operations: I can show the word, show the meaning, shuffle the deck, save only the problematic words into a narrower review list, and practice from definition to term or from term to definition. It really is very flexible.

Now, Reader, in one online high school I work for, all my lessons are one-on-one. So, using the flashcard app is really easy: I share my screen and conduct the instruction.

But teaching a remote class, even one as small as eight students, poses a challenge to maintaining engagement and attention. Last week, I tried out a new strategy that turned out to work very well. The instructional context was a group of eight students in an AP French class, and I needed to teach vocabulary through direct instruction. Here’s what we did: I showed a new term and pronounced it several times. Next, I randomly called on a student to repeat and pronounce it. Then I showed the word’s meaning and randomly called on a different student to type the meaning in the Zoom chat, sent only to me. This protected them from any embarrassment if they got it wrong, although the exercise is designed to be easy enough to limit that possibility. After the session, I sent them a link to a short quiz. The whole thing took about 15 minutes for ten words.

But I was not really great at calling on all students evenly. Some faces were hidden in the way Zoom displays them, so some students did not get called on as much.

There’s a new application now at Innovation that helps teachers randomly select the next student to respond. It is installed in two places at present: in the main dashboard on the right and inside the flashcards app.

It’s very simple to use. In the flashcard app, click the “Call on Random” button on the left. On the right will appear a simple form. You type in the names, save them, then just click “Select random student”. Voilà! Your next participant!

The app randomly selects a student from the list and then removes them so they cannot be called again until everyone else has been. You can update the list any time.
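The no-repeat selection behavior described above can be sketched in a few lines of Python. This is an illustration of the logic, not the app's actual code, and the class name is my own:

```python
import random

class RandomCaller:
    """Pick students at random, never repeating until everyone has had a turn."""
    def __init__(self, names):
        self.names = list(names)
        self.pool = []

    def next_student(self):
        if not self.pool:                # everyone has been called:
            self.pool = self.names[:]    # refill the pool
            random.shuffle(self.pool)    # and reshuffle it
        return self.pool.pop()           # remove the pick so it can't repeat

caller = RandomCaller(["Ana", "Ben", "Chloé", "Dev"])
first_round = {caller.next_student() for _ in range(4)}
# All four students are called exactly once before anyone can repeat.
```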

Look for the random call app to be installed in a number of other places at the site, such as the improvised dialogue app.

Monitoring Student Progress in Real Time

Innovation has always developed in response to the authentic, practical instructional needs of students and teachers. In retirement, I am enjoying teaching part-time remotely, and this continues to inspire new apps and coding enhancements.

You know, Reader, if you take a good look at what you use to teach in digital spaces, you may observe, as I did, that much of it is software originally designed for office workers. Word processors, spreadsheets, presentation software and the like were made for adults doing largely self-directed work in office settings. We are so accustomed to these apps that we hardly realize they never quite fit the classroom, and that we are always creating modifications and work-arounds to make them work. And we get by…

21st century learning spaces, a paradigm often expounded here at this site, are virtual workspaces that really “fit” secondary instruction in ways that office productivity products do not. Let’s address monitoring student work.

One of my classes this year is an AP French class down in Texas. My objective was to teach them a new grammar point. During our in-class practice, I needed to be able to monitor their work while they were doing it.

Reader, you may already be familiar with Innovation’s grammar learning app. Students learning world languages benefit from practice transforming and generating utterances from prompts. The app meets this need by providing a digital learning space that is interactive. An algorithmic AI lets students know how close they are to the answer, for example, and the instructor can transform the content into a “live session” in which students participate in real time much like the famous Kahoot! game.

Innovation’s grammar app.

Adolescents can sometimes be distractible. In an in-person classroom, I have a reasonable observational capacity to notice and redirect distracted students. In remote teaching, this requires additional effort. What if I could see the students’ progress in real time as they worked?

Screenshot of a “live session”, an interactive space where the teacher poses prompts and students respond in real time.

People learning new things can sometimes make mistakes. In an in-person classroom, I can wander the room and peer over students’ shoulders. I can try to catch mistakes as they make them and offer correction in a more immediate way. It’s a shame to have to wait a day or two before addressing writing errors. Immediate feedback is more effective so that the other practice examples go well and inculcate the correct syntax. What if I could peer over everybody’s virtual shoulders while they practiced their new writing skill?

The monitor app is now installed in Innovation’s short answer and world language composition tasks. It allows the instructor to view every student who currently has saved work on the task. Click a student’s name, and the instructor can see their work in near real time (the view refreshes every ten seconds for technical reasons). In the short answer monitor, the number after each name shows how many responses that student has saved.

In situations where the teacher may wish to share the screen with the class, they can hide the student names and, for the short answer tasks, hide the correct answers.

The monitor, set up for a short answer task, showing students anonymously when needed.

The way I like to use this is as follows: I use two monitors. Monitor 2 is shared with students. I set the names to “Anonymous” and share that monitor, then select students at random from time to time to check their progress. I may focus on someone who is behind, or on someone I know needs more support (I can see the names before setting them to anonymous). On monitor 1, in the Zoom or Teams call, I use the chat to message students corrections, suggestions, and redirections if they appear off task.

The monitor app, hiding the correct answers in short answer tasks when needed.

To activate the monitor, scroll to the activity in your dashboard course playlist. You’ll find “Monitor Class” in the task dropdown. The monitor is installed for short answer and composition tasks at present. While you are wandering around the site, why not visit our newly opened shops? You can purchase my own activities, PowerPoints, and DBQs for social studies.

Adjusting Assessment Scores: Why and How

When it comes to grading, scores are often reported on a simple 0-100 scale. But, in many cases, it’s better to adjust those scores to make sure they truly reflect how well a student has mastered the material. This adjustment process is often referred to as normalization, and one common way to do this is through a method called z-score standardization.

What is Z-Score Standardization?

Imagine a group of students who took the same test. Some students might have performed really well, while others might have struggled. If we simply average all the scores and compare them to a fixed passing threshold (like 70%), it wouldn’t be fair to those students who performed well beyond the average. Z-score standardization is a way of adjusting scores so that they fit a more accurate and fair scale.

How it works:

Z-Score Calculation: The z-score tells us how far a student’s score is from the average score, measured in terms of standard deviations (which is a fancy way of saying how spread out the scores are). A positive z-score means the student did better than average, and a negative z-score means the student did worse than average.

The formula for calculating a z-score is:

z = (x − μ) / σ

where x is the student’s raw score, μ is the class mean (average), and σ is the standard deviation.

Adjusting Scores: Once we calculate each student’s z-score, we can adjust their scores to match a more standard scale. This is done by mapping the z-score onto a target mean and target standard deviation. The new score is calculated as:

Adjusted score = z × s + m

where s is the target standard deviation and m is the target mean.

This formula uses the student’s z-score to adjust the score based on how far it is from the group’s average.

Why Do This?

  1. Fairer Grading: By adjusting for how scores are distributed (e.g., a test with a very easy or very hard question), the scores become fairer, especially when comparing students across different groups or assessments.
  2. Removing Bias: Sometimes, individual test questions are biased or poorly written, affecting how students perform. Z-score standardization helps eliminate that bias by focusing on the overall performance of the group.
  3. Outlier Handling: The method also takes into account “outliers” (e.g., one or two students who either do extremely well or very poorly). These outliers can skew results, so they’re filtered out to make the adjusted scores more reliable.

What Does This Look Like in Practice?

Let’s say a student scores a 90 on a test, but the average score for the class is 75, with a standard deviation of 10. To calculate the z-score for the student, we use the formula:

z = (90 − 75) / 10 = 1.5

This means the student’s score is 1.5 standard deviations above the class average.

Next, we use the z-score to adjust the student’s score. If we want to bring the class to a higher standard (say a target mean of 80, keeping the spread of 10), we apply the adjustment formula:

Adjusted score = 80 + 1.5 × 10 = 95

So, the student’s adjusted score is now 95, reflecting their performance in relation to the class and the new target.
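The arithmetic in this worked example (raw 90, class mean 75, SD 10, target mean 80) can be checked in a few lines of Python:

```python
raw, mu, sigma = 90, 75, 10
target_mean = 80

z = (raw - mu) / sigma               # 1.5 standard deviations above the mean
adjusted = target_mean + z * sigma   # keep the class's spread of 10
print(z, adjusted)                   # 1.5 95.0
```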

Z-score standardization is often mistaken for “curving” scores, but they are fundamentally different. Curving typically involves adjusting all scores on a test so that the highest score becomes a perfect score, or so that the average is raised to a certain target (like 70%). This method can unfairly benefit some students and disadvantage others.

In contrast, z-score standardization adjusts individual scores based on how far they are from the class average, ensuring that each student’s performance is evaluated relative to the entire group, not a fixed threshold. By considering the spread of scores (the standard deviation) and handling outliers, z-score standardization provides a more accurate reflection of a student’s performance, removing the arbitrary nature of curving and offering a fairer, more statistically sound approach to grading.

Innovation makes it incredibly easy for teachers to adjust and standardize assessment scores with our powerful, user-friendly tool. By using z-score standardization, our app helps teachers fairly align scores to a standard scale, taking into account the unique distribution of each class’s performance. With automatic outlier detection and score adjustments, teachers no longer need to worry about arbitrary curving or biased grading. It’s an efficient, data-driven solution that ensures every student’s performance is evaluated accurately and equitably, all with minimal effort on the teacher’s part.

Can Vocabulary Knowledge Predict Content Knowledge? Unveiling Insights from Classroom Practice

Encountering a scholarly paper delving into curriculum-based measures (CBM) for content area secondary courses like social studies ignited my curiosity. Eager to implement and extend their research, I embarked on a journey within my own classroom. As an educator committed to maximizing my students’ potential, I aimed to investigate whether vocabulary knowledge could serve as a predictor of content comprehension. Through practical application and careful observation, I sought to unearth valuable insights to refine my teaching methodologies. Here’s what unfolded during this intriguing exploration.

The intersection of vocabulary and content knowledge has long been a subject of interest in education. While vocabulary is recognized as a fundamental component of academic proficiency, its role in anticipating students’ understanding of intricate subject matter remains a matter of debate. The paper I encountered proposed that assessing students’ vocabulary knowledge through CBM could offer valuable insights into their grasp of content area material, particularly in disciplines like social studies characterized by specialized terminology and concepts.

To test this hypothesis, I integrated vocabulary-focused CBM into my social studies curriculum, meticulously tracking students’ progress across multiple units. I developed targeted vocabulary assessments, quizzes, and assignments tailored to evaluate students’ familiarity with key terms and concepts relevant to each unit of study. Additionally, I incorporated vocabulary-building exercises into classroom activities, discussions, and readings to reinforce students’ comprehension and retention of subject-specific terminology.

Through continuous assessment and analysis, intriguing patterns began to emerge in students’ performance. Those exhibiting proficiency in vocabulary consistently demonstrated a deeper comprehension of the content material. They showcased their understanding by articulating complex ideas, drawing connections between different topics, and applying their knowledge in diverse contexts. Conversely, students grappling with vocabulary challenges often struggled to grasp the underlying concepts and themes presented in the curriculum.

One significant revelation was the predictive capacity of certain high-utility terms in gauging students’ overall content mastery. Terms acting as linchpins or conceptual anchors within the curriculum correlated strongly with students’ performance on unit assessments and projects. By prioritizing the teaching and reinforcement of these critical vocabulary terms, I could scaffold students’ learning and facilitate deeper engagement with the subject matter.

Moreover, I observed that vocabulary instruction served as a gateway to content proficiency, enabling students to access and comprehend complex texts, primary sources, and multimedia resources more effectively. Equipping students with the linguistic tools to decode and interpret content area material empowered them to become more independent and self-directed learners.

However, it’s crucial to acknowledge the limitations of relying solely on vocabulary knowledge as a predictor of content understanding. While vocabulary forms a foundational aspect of academic literacy, it should be viewed as part of a broader assessment framework. Factors such as background knowledge, cognitive skills, and socio-cultural influences also significantly influence students’ learning experiences and outcomes.

In conclusion, my exploration into the relationship between vocabulary knowledge, CBM, and content understanding provided valuable insights into student learning dynamics within the social studies classroom. While vocabulary instruction can undoubtedly enhance students’ comprehension and retention of subject matter, it should complement a comprehensive approach to teaching and learning. By integrating targeted vocabulary CBM with engaging content-based activities and assessments, educators can create enriching learning experiences that foster deep understanding and critical thinking skills in students.