AI at Innovation: Three Ways Our Tools Support Teachers

Artificial intelligence isn’t here to replace teachers—it’s here to make their work more efficient, insightful, and impactful. At Innovation Assessments, we’ve built AI into our platform in three carefully designed ways. Each of these functions addresses a different part of the teaching cycle: preparing lessons, evaluating student work, and monitoring learning behaviors in real time.

Let’s take a closer look.


1. Teaching Assistant: Generating Prompts, Tasks, and Test Questions

Teachers often spend countless hours preparing materials: prompts for writing, comprehension tasks, practice questions, or even entire quizzes. Our AI-powered Teaching Assistant helps cut that prep time by generating high-quality starting points:

  • Assessment & activity prompts: Suggests open-ended discussion questions, role-play scenarios, or practice drills tailored to your subject.
  • Test question generation: Builds multiple-choice or short-answer items aligned to your chosen level and category, whether it’s social studies DBQs, French language tasks, or science practice sets.
  • Adaptability: Because the generator accepts teacher input on topic, difficulty, and format, you still set the pedagogical direction—the AI just does the heavy lifting.

The result? More time to focus on pedagogy and less on busywork.
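
For the technically curious, a question-generation request boils down to sending the teacher’s topic, level, and format to a language model and getting a draft back. Here’s a rough sketch in Python using the OpenAI SDK — the function name, prompt wording, and model choice are illustrative assumptions, not our production code:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_questions(topic: str, level: str, fmt: str, count: int = 5) -> str:
        """Illustrative sketch: ask the model for draft test questions.

        The teacher supplies the topic, difficulty level, and format; the AI
        returns a draft the teacher can edit or discard.
        """
        prompt = (
            f"Write {count} {fmt} questions for a {level} class on '{topic}'. "
            "Include an answer key. Keep the questions clear and age-appropriate."
        )
        response = client.chat.completions.create(
            model="gpt-4",  # model choice is an assumption for this sketch
            messages=[
                {"role": "system",
                 "content": "You are an experienced teacher writing assessment items."},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content

    # Example: five short-answer questions for a Grade 8 social studies unit
    # print(generate_questions("causes of the American Revolution", "Grade 8", "short-answer"))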


2. Grading Assistant: Scoring Short Answers and Longer Essays

Grading is where AI can provide meaningful support without ever removing teacher authority. Our Grading Assistant uses OpenAI’s models to analyze student responses and offer suggested scores or rubric-based comments:

  • Short answer scoring: Provides a confidence-scaled score (e.g., full credit, partial credit) with a rationale tied to your rubric.
  • Essay analysis: Surfaces structure, clarity, and argument strengths/weaknesses so you can give students faster, more targeted feedback.
  • Teacher control: Every score is a suggestion—teachers make the final call. AI never replaces professional judgment.

This approach reduces turnaround time and makes it easier to give richer feedback, even on assignments with dozens of responses.
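
To make the workflow concrete, here’s a simplified sketch of what a rubric-based scoring request could look like. It uses the OpenAI Python SDK; the prompt wording, model choice, and JSON shape are illustrative assumptions rather than our exact production setup, and the returned score is only ever a suggestion:

    import json
    from openai import OpenAI

    client = OpenAI()

    def suggest_score(question: str, rubric: str, answer: str, max_points: int) -> dict:
        """Illustrative sketch: request a suggested score plus a short rationale.

        The returned dict is only a suggestion; the teacher records the final score.
        """
        response = client.chat.completions.create(
            model="gpt-4",  # model choice is an assumption
            messages=[
                {"role": "system",
                 "content": "You score short-answer responses against a rubric. "
                            "Reply with JSON only: {\"score\": <0..max>, \"rationale\": \"...\"}."},
                {"role": "user",
                 "content": f"Question: {question}\nRubric: {rubric}\n"
                            f"Max points: {max_points}\nStudent answer: {answer}"},
            ],
        )
        # In practice you would validate the JSON before trusting it.
        return json.loads(response.choices[0].message.content)

    # suggestion = suggest_score("Define supply and demand.", "2 pts: both terms defined", text, 2)
    # The teacher reviews suggestion["score"] and suggestion["rationale"] before saving anything.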


3. Proctor Function: Analyzing Student Activity in Online Apps

Digital classrooms introduce new challenges: how do you know if students are fully engaged, struggling, or even drifting off task? Our Proctor Function gives teachers insight into behavior patterns during online interactions:

  • Session monitoring: Tracks student activity logs (e.g., navigation events, copy/paste, time away from page).
  • Pattern analysis: Uses AI to highlight irregularities—like frequent page exits during a quiz—or flag potential academic integrity concerns.
  • Formative insights: Goes beyond “cheating detection” by helping you spot disengagement, pacing issues, or moments when students may need extra support.

Think of it as a lens into classroom dynamics that’s hard to see in a virtual environment.


Why These Three?

We chose these categories—Teaching Assistant, Grading Assistant, Proctor—because together they cover the full arc of digital instruction:

  1. Before class (plan): generate engaging materials.
  2. After class (assess): provide consistent, fast feedback.
  3. During class (monitor): ensure students are active and supported.

Our guiding principle: AI should serve teachers, never the other way around.

Introducing Weighted Questions: The Smart Way to Grade Your Tests


Have you ever created a test where some questions were simply more important than others? Perhaps you included a single-sentence response question as a quick check for understanding alongside a more complex, multi-paragraph essay question that required deeper analysis.

Or have you decided after the test that some questions needed to be removed, or corrected the answer key and then needed to recalculate scores?

With our latest update, you can now assign specific weights to each question on your tests, allowing you to create more nuanced and accurate assessments.

What’s New?

You now have the power to define the value of every question. When you’re editing your test, you can set the weight for each question, for example:

  • Multiple-choice questions worth 1 point.
  • Short answer questions worth 5 points.
  • An essay question worth 20 points.

Our new scoring system will automatically calculate the final score for each student based on the weights you set.
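
Under the hood, the arithmetic is simple: add up the points earned on each question, divide by the total possible weight, and convert to a percentage. A minimal sketch of the idea (field names are illustrative):

    def weighted_percentage(results: list[dict]) -> float:
        """Compute a final percentage from per-question weights.

        Each result is a dict like {"weight": 5, "earned": 4}, where "earned"
        is the credit the student received out of "weight" possible points.
        """
        possible = sum(q["weight"] for q in results)
        earned = sum(q["earned"] for q in results)
        return round(100 * earned / possible, 1) if possible else 0.0

    # A 1-point MC item (correct), a 5-point short answer (3/5), a 20-point essay (16/20)
    print(weighted_percentage([
        {"weight": 1, "earned": 1},
        {"weight": 5, "earned": 3},
        {"weight": 20, "earned": 16},
    ]))  # -> 76.9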

The Power of Re-Scoring

But what if you decide to change the weights after students have already taken the test? This is where the magic happens.

With the click of a single button, you can now re-score an entire class. Our new and improved algorithm will recalculate every student’s grade, taking into account:

  • Updated Question Weights: If you change a question’s value, the scores will be instantly updated.
  • Answer Key Corrections: Did you find a mistake in the answer key? Correct it, and every student’s score will be recalculated.
  • Changes in Question Count: If you add or delete questions, the system will adjust the final score accordingly.

And it’s fast. This new re-scoring capability is built on a highly optimized system that can process hundreds of students and questions in a fraction of the time. We’ve even addressed edge cases, such as students who have had their old answers automatically archived, to ensure accurate and reliable results every time.
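
Conceptually, re-scoring just replays every stored response against the current answer key and weights. Here’s a simplified sketch of the idea — not the actual platform code, and limited to auto-scored items for clarity:

    def rescore_class(responses: dict, answer_key: dict, weights: dict) -> dict:
        """Recompute every student's percentage from stored responses.

        responses:  {student_id: {question_id: submitted_answer}}
        answer_key: {question_id: correct_answer}   (objective items only, for simplicity)
        weights:    {question_id: point_value}

        Questions deleted from the test simply disappear from `weights`,
        so the denominator shrinks automatically.
        """
        possible = sum(weights.values())
        new_scores = {}
        for student, answers in responses.items():
            earned = sum(
                weights[qid]
                for qid, correct in answer_key.items()
                if qid in weights and answers.get(qid) == correct
            )
            new_scores[student] = round(100 * earned / possible, 1) if possible else 0.0
        return new_scores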

Our goal is to give you more flexibility and control over your assessments, so you can focus on teaching. Try out the new weighted questions feature today and see the difference.


NEW! AI Analysis of Proctor Notes (Student Engagement on Tests)

We’re excited to introduce a brand-new tool for teachers: AI Proctor Analysis. This feature takes the detailed proctoring logs collected during online assessments and automatically generates a professional, concise summary of student behavior—helping teachers spot issues faster and focus on teaching instead of sifting through logs.

How it Works

During an assessment, our system records digital behavior events such as page switches, text pasting, and other activity. These notes are stored securely in the teacher’s proctoring database.

With the new feature:

  1. Logs are gathered – For each student and test, the platform collects all behavior notes.
  2. Cleaned & organized – Duplicate or redundant entries are filtered so the report is readable.
  3. Analyzed by AI – The logs are sent through our secure AI integration, which is instructed to act as a strict test proctor and highlight suspicious or irregular activity (see the sketch after this list).
  4. Teacher summary – In just a few sentences, the AI generates a professional summary for the teacher, flagging potential problems or confirming that behavior was normal.
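
Roughly speaking, the pipeline looks like the sketch below. The function name, prompt wording, and model choice are illustrative assumptions, not our production code:

    from openai import OpenAI

    client = OpenAI()

    def summarize_proctor_log(events: list[str]) -> str:
        """Drop consecutive duplicate events, then ask the model for a short summary."""
        cleaned = [e for i, e in enumerate(events) if i == 0 or e != events[i - 1]]
        response = client.chat.completions.create(
            model="gpt-4",  # model choice is an assumption for this sketch
            messages=[
                {"role": "system",
                 "content": "You are a strict test proctor. Summarize the activity log in "
                            "2-3 objective sentences, flagging anything irregular."},
                {"role": "user", "content": "\n".join(cleaned)},
            ],
        )
        return response.choices[0].message.content

    # summarize_proctor_log(["Started task", "Left page", "Returned to test", "Text pasted"])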

Why This Matters

  • Time-saving: No more scrolling through long behavior logs.
  • Professional tone: Reports are short, objective, and easy to share.
  • Enhanced oversight: Teachers get a clearer picture of digital test behavior at a glance.

Example

Instead of wading through dozens of raw log entries like:

Started task
Left page
Returned to test
Text pasted

The teacher sees a clear summary such as:

“The student briefly left the page twice and pasted text once. Behavior suggests potential use of outside resources. Recommend follow-up.”

Built with Security in Mind

  • Only authenticated teachers can access proctoring data.
  • Student activity logs are processed securely.
  • Every AI request is logged for accountability, including token usage and teacher identifiers.

The bottom line: With AI Proctor Analysis, you’ll spend less time interpreting logs and more time making informed decisions about your students’ online assessment behavior.


New Feature: Teacher Comments on Short Answer Questions

This week, while giving pretests for AP French, a teacher found they needed a way to leave notes directly on individual student responses. Today I rolled out a new feature that makes that possible.

What’s New

When reviewing a student’s short answer question, teachers will now see a “+ Comment” button alongside the scoring tools. Clicking this button opens a simple dialog where you can type in your feedback.

  • Comments are saved directly to the system, so they’re always there the next time you revisit the student’s work.
  • Each comment is linked to a specific student and specific question, so there’s no confusion about what the feedback refers to.
  • You can edit or delete your comment at any time.
  • An optional toggle lets you make the comment visible to the student, so a comment can serve as a private teacher note or as shared feedback (see the sketch below).
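
For the curious, each comment is essentially a small record tied to one student, one question, and one teacher. Here’s a sketch of what that record might look like — the field names are hypothetical, not the actual schema:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ShortAnswerComment:
        """Illustrative shape of a saved comment (not the real schema)."""
        student_id: str
        question_id: str
        teacher: str
        text: str
        visible_to_student: bool = False      # private teacher note by default
        created_at: datetime = None

    note = ShortAnswerComment(
        student_id="s-1042",
        question_id="q-7",
        teacher="D. Jones",
        text="Check this student's phrasing with the rubric later.",
        created_at=datetime.now(),
    )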

Why It Matters

This update is designed to give teachers more flexibility:

  • You can jot down quick reminders for yourself (“Check this student’s phrasing with the rubric later”).
  • You can leave direct feedback for students to help them improve.
  • You can build up a history of feedback that follows the student across sessions.

Keeping It Simple

I’ve worked to keep this feature as lightweight as possible:

  • Comments save instantly, no extra steps required.
  • The display is clean, with a simple box showing the comment, teacher name, and timestamp.
  • For students, if you’ve chosen to make a comment visible, they’ll see it in their results view — but private notes stay private.

New Feature: Send Real-Time Messages to Students During a Test

We just rolled out something that a lot of you have been asking for: you can now send quick, targeted messages straight to a student while they’re in the middle of taking a test.

Picture this:
You’re monitoring your class in the proctor view. One student seems stuck on a question, another keeps leaving full-screen, and someone else clearly didn’t read the instructions. Instead of calling them out across the room (or emailing after the fact), you can click their name, type a quick note, and it pops up right on their test page.

How it works

  • Click & type: From the monitor app, click the little “message” icon next to a student’s name.
  • Choose or write: Pick from a quick template like “Stay on the test page, please.” or type your own custom note (up to 500 characters).
  • Send: The message is instantly queued for that student.

When they get it

  • If they’re online (on that test): The message usually appears within seconds, in a blue notification box with an “OK” button.
  • If they’re not online yet: No worries—messages wait in a queue until they open that same test. As long as it’s within 24 hours, it’ll show up when they start.
  • If they’ve left the test: If they come back within 24 hours, the queued messages will be there. After that, the system automatically deletes them.

Either way, every message you send is permanently logged in the student’s proctoring record for that test—so even if they never see it in real time, you still have a record of what you sent.
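
Under the hood, the idea is a per-student, per-test message queue with a 24-hour shelf life. A simplified sketch (field names are illustrative, not the actual implementation):

    from datetime import datetime, timedelta

    MAX_AGE = timedelta(hours=24)
    MAX_LENGTH = 500  # characters

    def queue_message(queue: list, student_id: str, test_id: str, text: str) -> None:
        """Add a message for one student on one test; it waits until they load that test."""
        queue.append({
            "student_id": student_id,
            "test_id": test_id,
            "text": text[:MAX_LENGTH],
            "sent_at": datetime.now(),
        })

    def pending_messages(queue: list, student_id: str, test_id: str) -> list:
        """Return unexpired messages for this student and test; expired ones are dropped."""
        now = datetime.now()
        queue[:] = [m for m in queue if now - m["sent_at"] <= MAX_AGE]
        return [m for m in queue
                if m["student_id"] == student_id and m["test_id"] == test_id]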

A few tips

  • Keep it short and specific. The best in-test messages are direct (“Answer #3 needs a complete sentence.”) rather than vague.
  • If a student doesn’t respond or fix the issue, follow up in person or through your regular channels after the test.
  • Remember that messages are tied to a specific test—you can’t send a “global” message that will pop up no matter what page they’re on.

✨ New Feature: AI-Powered Lesson Plan Generator

Innovation Assessments now includes a powerful new tool for teachers: an AI-supported Lesson Plan Generator designed to help you produce detailed, standards-aligned plans in seconds — and in the exact format required by platforms like Proximity Learning.

Whether you’re preparing for a live session, writing plans for Canvas, or organizing your weekly teaching outline, this tool saves you time and effort — while keeping full control in your hands.


How It Works

Just enter a plain-English description of your lesson idea — including grade level, subject, standards, and your general teaching goals. Then, with a click, the AI returns a fully formatted lesson plan including:

  • Expanded, student-friendly interpretations of standards
  • Clear learning objectives
  • Essential questions
  • Instructional components: Bell ringer, direct instruction, guided and independent practice, and exit ticket
  • Cleanly formatted table ready to paste into Canvas or Docs
  • Optional links to official standards (like CCSS or TEKS)

You can even copy the full HTML with one click or paste directly into any editor.


Why This Matters

Creating high-quality, standards-aligned lesson plans takes time — especially when working across multiple platforms or preparing for virtual sessions. This tool helps teachers focus on what matters: great instruction, not paperwork.

  • No extra logins
  • No template juggling
  • No cutting and pasting from PDFs

Just describe what you want to teach — the AI handles the structure.

Coming Soon

We’re already working on enhancements, including:

  • Editable fields for fine-tuning after generation
  • Save and revisit past plans
  • Export to downloadable Word or PDF format
  • Support for CEFR levels and LanguageBird formatting
  • Integration with recurring weekly plans or pacing guides

Try It Now

Log in to your teacher dashboard, open the Utilities tab, and under Plan Book look for:

AI Lesson Plan Generator

And let us know what you’d like to see next — we’re building this tool for working educators like you.

AI Lesson Planning Tool for Remote and Virtual Educators

Teachers working with platforms like Proximity Learning, LanguageBird, and other virtual schools can now generate full, standards-aligned lesson plans instantly using AI. This tool supports Common Core (CCSS), NGSS, and TEKS-aligned plans for grades K–12, and integrates seamlessly with Canvas, Google Docs, and other LMS platforms.

Whether you’re teaching remote students in New York, Texas, California, or across the U.S., the AI Lesson Plan Generator helps ensure every plan meets your school’s formatting and curriculum standards.

Supports: math, ELA, science, social studies, and world languages. Ideal for certified teachers delivering synchronous or asynchronous instruction online.

This feature is provided by Innovation Assessments LLC — helping educators streamline planning, grading, and engagement with ethical, secure AI tools.

New! AI-Powered Whole-Test Analysis for Short Answer Questions

At Innovation Assessments, we believe that AI should support — not replace — teacher judgment. Today, we’re introducing a new optional tool that does just that: an AI-powered performance analysis of a student’s entire short-answer test.

This feature is now available as part of your scoring tools for any test using the test app!


How It Works

Once a student has completed a test — and all short-answer items have been scored (manually, with AI, or both) — teachers can now run a full-test analysis with the click of a button.

The system sends the following to GPT-4 (see the sketch after this list):

  • Every short-answer question
  • The student’s response and earned credit
  • Model answers provided by the teacher
  • Overall test and item statistics (averages, medians, etc.)
  • Optional teacher instructions (e.g., “Focus on grammar and cohesion”)
  • AI guidance settings (grade level, language, writing style, etc.)
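
Assembled as a single request, that payload might look roughly like the sketch below. Every name here — the helper function, the attributes, the guidance fields — is hypothetical, shown only to make the structure concrete:

    def build_analysis_request(test, student, teacher_notes="", guidance=None):
        """Illustrative assembly of the whole-test analysis payload (names hypothetical)."""
        items = [
            {
                "question": q.text,
                "model_answer": q.model_answer,
                "student_answer": q.response_for(student),
                "earned": q.credit_for(student),
                "possible": q.points,
            }
            for q in test.short_answer_questions
        ]
        return {
            "items": items,
            "test_stats": {"mean": test.mean_score, "median": test.median_score},
            "teacher_instructions": teacher_notes,   # e.g. "Focus on grammar and cohesion"
            "guidance": guidance or {"grade_level": "9", "language": "en"},
        }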

The result? A cohesive analysis of the student’s overall performance that you can use to:

  • Spot learning gaps or misconceptions
  • Suggest next steps
  • Generate report card comments
  • Guide parent-teacher conferences

You Stay in Control

Teachers can provide optional instructions before running the analysis. Want the AI to focus on organization? Fluency? Compare to class trends? Just add that guidance — the AI will listen.

Everything is transparent: what the AI sees, what it says, and how it got there. You can read, copy, and even edit the AI’s report if you choose to share it with the student.


A Word on Ethics

We deliberately chose not to automate this process behind the scenes. You must manually request the full analysis — so that this remains a teaching support tool, not a replacement for your judgment.

In fact, we hope the few extra seconds it takes to run the tool encourages thoughtful engagement rather than AI overuse.


Available Now

To try it:

  1. Score all short-answer items on a test.
  2. Go to the test analysis screen.
  3. Click Run AI Analysis next to a student.
  4. Wait for the modal to load — then read, reflect, and decide how to use the feedback.

We hope this helps deepen your understanding of student performance — and supports the kind of teaching that AI can never replace.

As always, questions or feedback welcome.

—The Innovation Assessments Team (which is basically just me, D. Jones)

How Innovation Assessments Uses ChatGPT to Support Educators

At Innovation Assessments, we’re always looking for ways to make life easier for teachers — and that’s why we’ve integrated ChatGPT directly into our platform.

This powerful AI tool helps educators generate prompts, review student work, analyze performance, and even assist with grading. It’s fast, reliable, and built into the same tools you already use on our site.

Whether you’re creating short-answer practice questions, looking for a model essay, or giving feedback on student responses, ChatGPT can help you save time and deliver meaningful support to learners. Better still, you’re always in control — the AI is a tool in your hands.

Behind the scenes, we’ve fine-tuned our prompts to match real classroom needs. From scoring rubrics to language-appropriate grammar support, we’ve trained the system to follow your lead.

We’re excited to keep developing even more smart tools powered by AI — always grounded in the realities of teaching.

Adjusting Assessment Scores: Why and How

When it comes to grading, scores are often reported on a simple 0-100 scale. But, in many cases, it’s better to adjust those scores to make sure they truly reflect how well a student has mastered the material. This adjustment process is often referred to as normalization, and one common way to do this is through a method called z-score standardization.

What is Z-Score Standardization?

Imagine a group of students who took the same test. Some students might have performed really well, while others might have struggled. If we simply compare every raw score to a fixed passing threshold (like 70%), we ignore how the whole class actually performed—on a difficult test, a score can look low in absolute terms even when it is well above the class average. Z-score standardization is a way of adjusting scores so that they fit a more accurate and fair scale.

How it works:

Z-Score Calculation: The z-score tells us how far a student’s score is from the average score, measured in terms of standard deviations (which is a fancy way of saying how spread out the scores are). A positive z-score means the student did better than average, and a negative z-score means the student did worse than average.

The formula for calculating a z-score is:

    z = (student’s score − class mean) ÷ standard deviation

Adjusting Scores: Once we calculate each student’s z-score, we can adjust their scores to match a more standard scale. This is done by applying the z-score to the mean (average) and standard deviation of the group’s scores. The new score is calculated as:

    adjusted score = target mean + (z × standard deviation)

This formula uses the student’s z-score to adjust the score based on how far it is from the group’s average.

Why Do This?

  1. Fairer Grading: By adjusting for how scores are distributed (e.g., a test with a very easy or very hard question), the scores become fairer, especially when comparing students across different groups or assessments.
  2. Removing Bias: Sometimes, individual test questions are biased or poorly written, affecting how students perform. Z-score standardization helps eliminate that bias by focusing on the overall performance of the group.
  3. Outlier Handling: The method also takes into account “outliers” (e.g., one or two students who either do extremely well or very poorly). These outliers can skew results, so they’re filtered out to make the adjusted scores more reliable.

What Does This Look Like in Practice?

Let’s say a student scores a 90 on a test, but the average score for the class is 75, with a standard deviation of 10. To calculate the z-score for the student, we use the formula:

    z = (90 − 75) ÷ 10 = 1.5

This means the student’s score is 1.5 standard deviations above the class average.

Next, we use the z-score to adjust the student’s score. If we want to bring the class to a higher standard (let’s say the target mean is 80), we use the formula for adjusting the score:

    adjusted score = 80 + (1.5 × 10) = 95

So, the student’s adjusted score is now 95, reflecting their performance in relation to the class and the new target.
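
In code, the whole adjustment takes only a few lines. The sketch below uses Python’s statistics module and a simple two-standard-deviation outlier filter; it illustrates the method rather than reproducing the exact algorithm in our app:

    from statistics import mean, stdev
    from typing import Optional

    def standardized_scores(raw: list, target_mean: float = 80.0,
                            target_sd: Optional[float] = None) -> list:
        """Adjust raw scores with z-score standardization (assumes at least two scores).

        Scores more than 2 standard deviations from the class mean are ignored
        when computing the class statistics (a simple outlier filter), but every
        student still receives an adjusted score. If target_sd is None, the
        class's own spread is kept, matching the worked example above.
        """
        mu, sd = mean(raw), stdev(raw)
        kept = [s for s in raw if abs(s - mu) <= 2 * sd] or raw
        mu, sd = mean(kept), stdev(kept)
        if sd == 0:  # everyone scored the same; nothing to spread
            return [round(target_mean, 1) for _ in raw]
        out_sd = sd if target_sd is None else target_sd
        return [round(target_mean + ((s - mu) / sd) * out_sd, 1) for s in raw]

    # With a class mean of 75 and SD of 10, a raw score of 90 has z = 1.5,
    # so with a target mean of 80 it becomes 80 + 1.5 * 10 = 95.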

Z-score standardization is often mistaken for “curving” scores, but they are fundamentally different. Curving typically involves adjusting all scores on a test so that the highest score becomes a perfect score, or the average score is raised to a certain target (like 70%). This method can unfairly benefit some students and disadvantage others. In contrast, z-score standardization adjusts individual scores based on how far they are from the class average, ensuring that each student’s performance is evaluated relative to the entire group, not a fixed threshold. By considering the spread of scores (standard deviation) and handling outliers, z-score standardization provides a more accurate reflection of a student’s performance, removing the arbitrary nature of curving and offering a fairer and more statistically sound approach to grading.

Innovation makes it incredibly easy for teachers to adjust and standardize assessment scores with our powerful, user-friendly tool. By using z-score standardization, our app helps teachers fairly align scores to a standard scale, taking into account the unique distribution of each class’s performance. With automatic outlier detection and score adjustments, teachers no longer need to worry about arbitrary curving or biased grading. It’s an efficient, data-driven solution that ensures every student’s performance is evaluated accurately and equitably, all with minimal effort on the teacher’s part.


Can Vocabulary Knowledge Predict Content Knowledge? Unveiling Insights from Classroom Practice

Encountering a scholarly paper delving into curriculum-based measures (CBM) for content area secondary courses like social studies ignited my curiosity. Eager to implement and extend their research, I embarked on a journey within my own classroom. As an educator committed to maximizing my students’ potential, I aimed to investigate whether vocabulary knowledge could serve as a predictor of content comprehension. Through practical application and careful observation, I sought to unearth valuable insights to refine my teaching methodologies. Here’s what unfolded during this intriguing exploration.

The intersection of vocabulary and content knowledge has long been a subject of interest in education. While vocabulary is recognized as a fundamental component of academic proficiency, its role in anticipating students’ understanding of intricate subject matter remains a matter of debate. The paper I encountered proposed that assessing students’ vocabulary knowledge through CBM could offer valuable insights into their grasp of content area material, particularly in disciplines like social studies characterized by specialized terminology and concepts.

To test this hypothesis, I integrated vocabulary-focused CBM into my social studies curriculum, meticulously tracking students’ progress across multiple units. I developed targeted vocabulary assessments, quizzes, and assignments tailored to evaluate students’ familiarity with key terms and concepts relevant to each unit of study. Additionally, I incorporated vocabulary-building exercises into classroom activities, discussions, and readings to reinforce students’ comprehension and retention of subject-specific terminology.

Through continuous assessment and analysis, intriguing patterns began to emerge in students’ performance. Those exhibiting proficiency in vocabulary consistently demonstrated a deeper comprehension of the content material. They showcased their understanding by articulating complex ideas, drawing connections between different topics, and applying their knowledge in diverse contexts. Conversely, students grappling with vocabulary challenges often struggled to grasp the underlying concepts and themes presented in the curriculum.

One significant revelation was the predictive capacity of certain high-utility terms in gauging students’ overall content mastery. Terms acting as linchpins or conceptual anchors within the curriculum correlated strongly with students’ performance on unit assessments and projects. By prioritizing the teaching and reinforcement of these critical vocabulary terms, I could scaffold students’ learning and facilitate deeper engagement with the subject matter.

Moreover, I observed that vocabulary instruction served as a gateway to content proficiency, enabling students to access and comprehend complex texts, primary sources, and multimedia resources more effectively. Equipping students with the linguistic tools to decode and interpret content area material empowered them to become more independent and self-directed learners.

However, it’s crucial to acknowledge the limitations of relying solely on vocabulary knowledge as a predictor of content understanding. While vocabulary forms a foundational aspect of academic literacy, it should be viewed as part of a broader assessment framework. Factors such as background knowledge, cognitive skills, and socio-cultural influences also significantly influence students’ learning experiences and outcomes.

In conclusion, my exploration into the relationship between vocabulary knowledge, CBM, and content understanding provided valuable insights into student learning dynamics within the social studies classroom. While vocabulary instruction can undoubtedly enhance students’ comprehension and retention of subject matter, it should complement a comprehensive approach to teaching and learning. By integrating targeted vocabulary CBM with engaging content-based activities and assessments, educators can create enriching learning experiences that foster deep understanding and critical thinking skills in students.