Introducing SlideCraft: Collaborative Presentations Without the Formatting Distraction

One of the most effective ways for students to master new content is to own it. When a student has to synthesize a topic, identify what matters, and teach it back to their peers, the learning sticks.

However, in a typical classroom, “making a presentation” often turns into a week-long odyssey of font choices, transitions, and image cropping. The actual thinking—the synthesis—gets buried under the formatting.

That’s why we built SlideCraft. It’s a new tool within Innovation Assessments designed for speed, accountability, and meaningful participation. It’s not a full-featured slide editor; it’s a structured workflow that turns a class’s collective research into a ready-to-present deck in minutes.

The Problem with “Death by PowerPoint” (and Canva, and Slides…)

In many EdTech tools, “engagement” is equated with gamification—points, music, and flashy animations. At Innovation, we believe real engagement is cognitive. We want students spending their mental effort on the history, the science, or the literature, not on the “rules of the game” or the aesthetics of a slide border.

SlideCraft is built for a specific, powerful classroom pattern:

  1. The Hook: The teacher introduces a topic.
  2. The Task: Students are assigned specific subtopics or “jigsaw” pieces.
  3. The Build: Students research quickly and build exactly one slide.
  4. The Share: The class presents the completed, unified deck immediately.

How It Works: Designed for the Live Classroom

SlideCraft lives in two places: your prep time and your live instruction.

Teacher Setup (The Prep) In configuration, you build the skeleton of the lesson. You can add up to five starter slides (intro, instructions, or framing) and then define the “prompts” students will receive. These prompts are reusable, meaning you can run the same activity with five different sections without rebuilding it from scratch.

The Live Session (The Action) When class starts, you launch the Live Host from your course playlist. Students join via a link from their login page and are automatically assigned one of your prompts.

As they work, you can:

  • Monitor incoming drafts in real-time.
  • Set a countdown timer or stop the session manually.
  • Autosave everything: Because this is built for real-world school Wi-Fi and its interruptions, student work is preserved constantly as they type.

What Students See: Focus over Frills

The student interface is intentionally lean. There are no menus for “WordArt” or background gradients. Students see:

  • Their assigned title and specific instructions.
  • A field for concise bullet points.
  • An image upload (optional).
  • A Source URL field: This is critical. By making the source a required part of the “Craft,” we reinforce academic integrity from the first click.

From “Building” to “Presenting” in One Click

The moment you stop the build session, the host view transforms into a presentation stage.

The finished deck is automatically assembled: your intro slides first, followed by the student-generated content. During the presentation, the teacher has access to a Presenter Timer and a Show Sources toggle. This allows you to pause the lesson and discuss source credibility or authority on the fly—turning a student slide into a teachable moment about information literacy.

Accountability and Scoring

SlideCraft isn’t just an “activity”—it’s an assessment. Once the presentation is over, the work doesn’t disappear. All student submissions are saved for review. Using the familiar Submissions and Score tools, you can:

  • Evaluate slides using your existing rubrics.
  • Score based on the quality of the bullets and the reliability of the sources.
  • Provide written feedback and release evaluations to students.

A First Use Case: The French Revolution

Imagine a lesson on the causes of the French Revolution.

  • Teacher Intro: 3 slides on the monarchy and the Three Estates.
  • The Build: Students are assigned prompts like The Bread Crisis, Enlightenment Ideas, The American Influence, and Louis XVI’s Debt.
  • The Result: Within 15 minutes, you have a 25-slide deck built by the class.

You aren’t just lecturing; the students are providing the evidence.

SlideCraft fills the gap between passive slide-viewing and time-consuming independent projects. It’s built for teachers who want their students to be active, collaborative, and accountable—without the “formatting fatigue.”

If you’re ready to turn your next research burst into a live class product, SlideCraft is ready for you in the Innovation dashboard.

The Growth Bonus: Rewarding Improvement While Maintaining Academic Standards

Two students submit essays that both receive a score of 75.

At first glance, their performance appears identical. But the stories behind those two scores may be very different. One student might have scored a 74 on the previous assignment—essentially maintaining the same level of work. Another might have improved dramatically from a 60.

In both cases the essays themselves may be similar in quality. Yet one student clearly demonstrated substantial learning along the way.

This raises an interesting question for teachers: should grades reflect only the current piece of work, or should they also recognize improvement over time?

In many courses, particularly those that emphasize writing and analytical thinking, improvement is an important part of the learning process. Students revise strategies, incorporate feedback, and gradually strengthen their arguments and use of evidence.

To recognize that progress without distorting the meaning of grades, some assignments may include what we call a growth bonus.

The idea is simple: meaningful improvement deserves recognition—but the quality of the current work must still matter most.


How the Growth Bonus Works

The growth bonus uses a mathematical rule that compares the current score with a previous comparable assignment.

Three values are involved:

R – the raw score on the current assignment
B – the score from a previous assignment
T – a readiness target representing strong course-level work (often around 82)

The adjusted score is calculated as:

Adjusted = max(R, R + 0.8 × max(0, R − B) − 0.2 × max(0, T − R))

In plain language, the formula does three things at the same time.

First, it rewards improvement from the previous assignment. If a student improves by ten points, most of that improvement is reflected in the adjustment.

Second, it moderates extremely large score jumps when the current essay is still below the level expected for the course. This keeps the adjustment from turning a developing essay into a top-tier score.

Finally—and importantly—the formula guarantees that the adjusted score can never be lower than the original score.

The growth bonus can help a score. It cannot hurt it.
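The rule can be sketched as a small Python function. The function name is illustrative; the 0.8 and 0.2 weights and the default target of 82 come straight from the formula above:

```python
def growth_adjusted_score(raw, previous, target=82.0):
    """Growth bonus: Adjusted = max(R, R + 0.8*max(0, R-B) - 0.2*max(0, T-R)).

    raw      -- R, score on the current assignment
    previous -- B, score on a comparable previous assignment
    target   -- T, the readiness target (often around 82)
    """
    improvement_reward = 0.8 * max(0.0, raw - previous)  # reward most of any gain
    readiness_moderator = 0.2 * max(0.0, target - raw)   # temper scores still below target
    adjusted = raw + improvement_reward - readiness_moderator
    return max(raw, adjusted)  # the bonus can help a score, never hurt it

print(round(growth_adjusted_score(72, 61), 1))  # → 78.8
print(growth_adjusted_score(60, 75))            # → 60 (a decline leaves the raw score intact)
```

The outer `max` is what guarantees the "can help, cannot hurt" property.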


A Quick Example

Suppose a student scored 61 on a previous essay and 72 on the current one.

The improvement is:

72 − 61 = 11

Most of that improvement is rewarded:

0.8 × 11 = 8.8

Because the essay is still somewhat below the readiness target of 82, a small moderating adjustment is applied:

0.2 × (82 − 72) = 2

The adjusted score becomes:

72 + 8.8 − 2 = 78.8

The student’s improvement is recognized, but the final score still reflects the level of the current work.


What Happens If the Score Declines?

If the new score is lower than the previous one, the improvement term becomes zero. In theory the formula could produce a slightly lower number—but the rule

max(R, …)

ensures that the final score never drops below the original score.

In practice, this simply means the raw score stands as it is.


Why Not Just Use Standardization?

Standardization adjusts scores based on the statistical distribution of scores in the class.

A simplified version of the formula looks like this:

Standardized score = ((R − μ) / σ) × s + m

Here:

R is the raw score,
μ is the class average,
σ is the standard deviation,
and the constants s and m determine the new spread and average of the scores.

Standardization can be useful when a test turns out to be unusually difficult or unusually easy. However, it measures performance relative to the class rather than improvement over time.

In some cases it can also produce surprisingly large adjustments. A raw score in the low seventies might become a ninety simply because the class average was low.
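A quick sketch shows how the class statistics drive the result. The target mean and spread here are hypothetical choices, not fixed values:

```python
def standardize(raw, class_mean, class_std, target_mean=80.0, target_std=8.0):
    """Re-map a raw score onto a target distribution: ((R - mu) / sigma) * s + m."""
    z = (raw - class_mean) / class_std
    return z * target_std + target_mean

# A raw 72 in a class that averaged 58 (sigma = 10) jumps past 90:
print(round(standardize(72, class_mean=58, class_std=10), 1))  # → 91.2
```

Nothing about the essay changed; only the class average did. That is exactly the behavior the growth bonus approach avoids.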

The growth bonus approach focuses instead on learning progress—recognizing students who improve while still keeping grades tied closely to the quality of the work itself.


Why the Readiness Target Matters

The readiness target used in the formula—often around 82—represents the level of performance typically associated with strong work on AP-style writing rubrics.

It is not a passing threshold or a minimum expectation. Instead, it serves as a reference point that helps keep score adjustments realistic.

Students who are already writing at a strong level will see modest adjustments. Students who are improving rapidly will see more noticeable ones.


The Larger Goal

Ultimately, the purpose of the growth bonus is not to inflate grades. It is to encourage the kinds of behaviors that lead to real academic progress: revising writing strategies, strengthening arguments, integrating evidence more effectively, and improving clarity and precision of language.

Grades should communicate meaningful information about learning. They should reflect both where a student stands today and how far that student has come.

The growth bonus is one way of recognizing both.

Precision in Assessment: Why Standardization Outperforms the Traditional “Curve”

In secondary and post-secondary education, teachers often face a “measurement gap.” This occurs when a highly rigorous assessment—such as a mock professional exam or a complex technical project—yields raw scores that accurately reflect performance benchmarks but fail to align with the broader institutional grading scale.

To bridge this gap, many educators rely on a “curve.” However, traditional curving often lacks statistical validity. Standardization, specifically through the use of Z-scores, offers a more mathematically sound and equitable alternative.

The Limitations of Common “Curves”

The term “curve” is frequently applied to two common but flawed methods:

  1. The Flat-Point Addition: Adding a set number of percentage points to every student. While “fair” in its uniformity, it does nothing to address the variance or “spread” of the scores.
  2. The Ceiling Curve: Adjusting the highest score to 100% and shifting others accordingly. This makes the entire class’s grades dependent on a single outlier, which can lead to volatile and inconsistent results.

These methods are essentially “band-aids” that fail to account for the relative performance of the cohort.

The Logic of Standardization (Z-Scores)

Standardization treats a set of scores as a distribution. By converting raw scores into Z-scores, we determine exactly how many standard deviations a student’s performance sits above or below the group mean.

The formula for calculating a Z-score is: z = (x − μ) / σ, where x is the raw score, μ is the mean, and σ is the standard deviation.

Once we have the Z-score, we can “re-map” it onto a target distribution (such as a school’s historical GPA mean). This ensures that a student who performs at the 90th percentile on a difficult assessment is rewarded with a grade that reflects that 90th-percentile standing in the gradebook.
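The re-mapping step can be sketched like this, assuming a hypothetical target mean and standard deviation. Note that the relative ordering of students is preserved:

```python
import statistics

def remap_scores(raw_scores, target_mean, target_std):
    """Convert each raw score to a z-score, then re-map it onto a target
    distribution so relative standing in the cohort carries over."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.pstdev(raw_scores)  # population standard deviation
    return [((x - mu) / sigma) * target_std + target_mean for x in raw_scores]

raw = [48, 55, 61, 67, 74]  # raw scores from a deliberately rigorous assessment
curved = remap_scores(raw, target_mean=83, target_std=6)
print([round(s, 1) for s in curved])  # rank order is unchanged; only the scale moves
```

Because every score passes through the same linear transform, a student at the 90th percentile of the raw distribution stays at the 90th percentile of the curved one.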

Why Standardization is the Professional Choice

  • Maintains Rubric Integrity: Educators can grade with extreme rigor against high-level standards without fear of destroying a student’s GPA. The raw feedback remains honest, while the gradebook remains fair.
  • Corrects for Assessment Difficulty: Not every test is of equal difficulty. Standardization automatically adjusts for a test that was “too hard” or “too easy” by focusing on the student’s relative mastery within the cohort.
  • Statistical Defensibility: If a grade is challenged, the educator can point to a transparent, mathematical process based on the class distribution rather than an arbitrary “bump” in points.

By adopting standardization, we move away from “adjusting numbers” and toward “aligning distributions.” This practice respects the data produced by the assessment while ensuring that the final grade accurately reflects a student’s standing within the academic environment.

Innovation Assessments LMS: Next-Gen Release

  • We’ve tightened core authentication: teachers and students can now connect Google Sign-In, making it easier to jump into any of the 12 apps (Étude, Test, Grammar, Writing, Word Study, Ordered List, Conversation, Chat, Forum, Media, Ventura, and the course hub) without managing multiple passwords.
  • Teachers get sharper control and visibility: My Students now links directly into a new Manage Enrolments matrix for one-click course assignments. Course pages display Canvas-style visibility badges so you can hide or reveal tasks instantly, and the student course view hides anything marked invisible. Submission dashboards in Étude, Test, Grammar, Conversation, Ordered List, and Writing now sort alphabetically and drop the developer-only raw JSON viewers, so grading workflows stay focused.
  • Live and asynchronous speaking tools are more authentic. Conversation tasks and upload workflows now coach students on cadence—natural rhythm, pauses, and intonation—plus we added an easy-to-copy student join link in Chat Monitor so hosts can share the live room with a single click. Chat’s monitor dashboard also highlights AI usage limits, host controls, and live transcripts for each room.
  • The LMS navigation feels smarter: “Login Preferences” is available to both teachers and students (with role-aware sidebars, help text, and instant Google linking), the teacher sidebar is scrollable and auto-hides developer tools unless you’re David Jones, and “Manage Rubrics” now supports copy-modify/delete operations with a single tap.
  • Media and forum workflows keep pace. Teachers can bulk-copy forums, toggle task visibility, and delete an entire forum plus its threads/posts with FK-safe cascading. The Create Task page has unified button styling, refreshed descriptions, and an updated “Coming Soon” panel (Survey, Audio Playlist, File on Server, Link to Webpage), making it easier to explain each element to staff.

App-by-App Highlights

  • Étude/Test/Grammar/Writing: Alphabetized submission rosters, lightweight UIs, AI license tracking in Grammar/Writing, and Google linking across all auth flows. Live tests record into test_task_student_responses, and we expose the schema dynamically via the updated inspect_schema.php.
  • Conversation (Record & Upload): Cadence coaching cards, Chromebook recording guides in the sidebar, admission logging, and Safari upload workflows that mirror the in-browser recorder.
  • Chat: Google Sign-In ready, live monitor with copyable join link, AI partner/peer modes, host controls (start/close, pair, reshuffle), and cadence-free text boxes (spellcheck disabled in chat and writ) to keep speech and writing authentic.
  • Forum: Manage Rubrics now clones/deletes as needed; forum visibility obeys course toggles; forum submissions keep their evaluation links; deleting a forum removes all threads/posts/AI usage.
  • Course Hub: Manage Enrolments grid, Canvas-style visibility icons, module-level controls, and student course views that hide anything not marked visible.
  • Media/Ventura/Word Study/Ordered List: Unified CTA styling, better descriptions, topic-specific guidance, and easy access to each element’s evaluation links from the student side.

Why It Matters

This release cements Innovation Assessments as a coherent suite rather than 12 separate tools. Authentication, visibility, enrolment, cadence training, grading, and schema inspection all work the same way across apps. Teachers can share tasks, manage rubrics, and run live sessions with less friction; students get natural speaking guidance, Google sign-in, and cleaner course views. We’re pushing the beta to customers now—watch for teacher-to-teacher sharing options (one-time copy codes) coming soon.

AI at Innovation: Three Ways Our Tools Support Teachers

Artificial intelligence isn’t here to replace teachers—it’s here to make their work more efficient, insightful, and impactful. At Innovation Assessments, we’ve built AI into our platform in three carefully designed ways. Each of these functions addresses a different part of the teaching cycle: preparing lessons, evaluating student work, and monitoring learning behaviors in real time.

Let’s take a closer look.


1. Teaching Assistant: Generating Prompts, Tasks, and Test Questions

Teachers often spend countless hours preparing materials: prompts for writing, comprehension tasks, practice questions, or even entire quizzes. Our AI-powered Teaching Assistant helps cut that prep time by generating high-quality starting points:

  • Assessment & activity prompts: Suggests open-ended discussion questions, role-play scenarios, or practice drills tailored to your subject.
  • Test question generation: Builds multiple-choice or short-answer items aligned to your chosen level and category, whether it’s social studies DBQs, French language tasks, or science practice sets.
  • Adaptability: Because the generator accepts teacher input on topic, difficulty, and format, you still set the pedagogical direction—the AI just does the heavy lifting.

The result? More time to focus on pedagogy and less on busywork.


2. Grading Assistant: Scoring Short Answers and Longer Essays

Grading is where AI can provide meaningful support without ever removing teacher authority. Our Grading Assistant uses OpenAI’s models to analyze student responses and offer suggested scores or rubric-based comments:

  • Short answer scoring: Provides a confidence-scaled score (e.g., full credit, partial credit) with a rationale tied to your rubric.
  • Essay analysis: Surfaces structure, clarity, and argument strengths/weaknesses so you can give students faster, more targeted feedback.
  • Teacher control: Every score is a suggestion—teachers make the final call. AI never replaces professional judgment.

This approach reduces turnaround time and makes it easier to give richer feedback, even on assignments with dozens of responses.


3. Proctor Function: Analyzing Student Activity in Online Apps

Digital classrooms introduce new challenges: how do you know if students are fully engaged, struggling, or even drifting off task? Our Proctor Function gives teachers insight into behavior patterns during online interactions:

  • Session monitoring: Tracks student activity logs (e.g., navigation events, copy/paste, time away from page).
  • Pattern analysis: Uses AI to highlight irregularities—like frequent page exits during a quiz—or flag potential academic integrity concerns.
  • Formative insights: Goes beyond “cheating detection” by helping you spot disengagement, pacing issues, or moments when students may need extra support.

Think of it as a lens into classroom dynamics that’s hard to see in a virtual environment.


Why These Three?

We chose these categories—Teaching Assistant, Grading Assistant, Proctor—because together they cover the full arc of digital instruction:

  1. Before class (plan): generate engaging materials.
  2. After class (assess): provide consistent, fast feedback.
  3. During class (monitor): ensure students are active and supported.

Our guiding principle: AI should serve teachers, never the other way around.

Introducing Weighted Questions: The Smart Way to Grade Your Tests


Have you ever created a test where some questions were simply more important than others? Perhaps a single-sentence response question you intended as a quick check for understanding, alongside a more complex, multi-paragraph essay question that required deeper analysis.

Or have you decided after the test that some questions needed to be removed? Or found an error in the answer key and needed to recalculate scores?

With our latest update, you can now assign specific weights to each question on your tests, allowing you to create more nuanced and accurate assessments.

What’s New?

You now have the power to define the value of every question. When you’re editing your test, you can set the weight for each question, for example:

  • Multiple-choice questions worth 1 point.
  • Short answer questions worth 5 points.
  • An essay question worth 20 points.

Our new scoring system will automatically calculate the final score for each student based on the weights you set.
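One straightforward way to implement this kind of weighted scoring is shown below. This is a sketch, not necessarily the platform's exact algorithm:

```python
def weighted_percent(results):
    """Final score from (points_earned, question_weight) pairs, as a
    percentage of the total possible weight."""
    earned = sum(points for points, _ in results)
    possible = sum(weight for _, weight in results)
    return 100.0 * earned / possible

# Ten 1-point multiple choice (8 correct), a 5-point short answer scored 4,
# and a 20-point essay scored 16:
results = [(1, 1)] * 8 + [(0, 1)] * 2 + [(4, 5), (16, 20)]
print(weighted_percent(results))  # → 80.0
```

The essay dominates the grade here, as intended: it alone is worth more than all ten multiple-choice items combined.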

The Power of Re-Scoring

But what if you decide to change the weights after students have already taken the test? This is where the magic happens.

With the click of a single button, you can now re-score an entire class. Our new and improved algorithm will re-calculate every student’s grade, taking into account:

  • Updated Question Weights: If you change a question’s value, the scores will be instantly updated.
  • Answer Key Corrections: Did you find a mistake in the answer key? Correct it, and every student’s score will be recalculated.
  • Changes in Question Count: If you add or delete questions, the system will adjust the final score accordingly.

And it’s fast. This new re-scoring capability is built on a highly optimized system that can process hundreds of students and questions in a fraction of the time. We’ve even addressed edge cases, such as students who have had their old answers automatically archived, to ensure accurate and reliable results every time.
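Conceptually, re-scoring is a pure recomputation: given the corrected key and the updated weights, each student's grade is simply recalculated from their stored answers. The dictionary shapes below are hypothetical, chosen to illustrate the idea:

```python
def rescore(responses, answer_key, weights):
    """Recompute one student's score after the key or weights change.

    responses  -- {question_id: student_answer} (stored submissions)
    answer_key -- {question_id: correct_answer} (corrected key)
    weights    -- {question_id: points}         (updated weights)
    """
    earned = sum(weights[q] for q, correct in answer_key.items()
                 if responses.get(q) == correct)
    possible = sum(weights.values())
    return 100.0 * earned / possible if possible else 0.0

# After correcting the key for Q2, just recompute from stored answers:
student = {"Q1": "B", "Q2": "C", "Q3": "A"}
print(rescore(student, {"Q1": "B", "Q2": "A", "Q3": "A"},
              {"Q1": 1, "Q2": 1, "Q3": 3}))  # → 80.0
```

Deleting a question is just removing it from the key and weights before recomputing; no per-student bookkeeping is needed.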

Our goal is to give you more flexibility and control over your assessments, so you can focus on teaching. Try out the new weighted questions feature today and see the difference.


NEW! AI Analysis of Proctor Notes (Student Engagement on Tests)

We’re excited to introduce a brand-new tool for teachers: AI Proctor Analysis. This feature takes the detailed proctoring logs collected during online assessments and automatically generates a professional, concise summary of student behavior—helping teachers spot issues faster and focus on teaching instead of sifting through logs.

How it Works

During an assessment, our system records digital behavior events such as page switches, text pasting, and other activity. These notes are stored securely in the teacher’s proctoring database.

With the new feature:

  1. Logs are gathered – For each student and test, the platform collects all behavior notes.
  2. Cleaned & organized – Duplicate or redundant entries are filtered so the report is readable.
  3. Analyzed by AI – The logs are sent through our secure AI integration. The AI is instructed to act as a strict test proctor, highlighting suspicious or irregular activity.
  4. Teacher summary – In just a few sentences, the AI generates a professional summary for the teacher, flagging potential problems and confirming if behavior was normal.
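Step 2, filtering redundant entries, might look something like the sketch below; the platform's actual cleaning logic is not shown here:

```python
def clean_log(entries):
    """Collapse consecutive duplicate events, annotating repeat counts,
    so the behavior report stays readable."""
    cleaned = []
    for event in entries:
        if cleaned and cleaned[-1][0] == event:
            cleaned[-1][1] += 1  # same event again: bump the count
        else:
            cleaned.append([event, 1])
    return [f"{event} (x{count})" if count > 1 else event
            for event, count in cleaned]

log = ["Started task", "Left page", "Left page", "Returned to test", "Text pasted"]
print(clean_log(log))  # → ['Started task', 'Left page (x2)', 'Returned to test', 'Text pasted']
```

The condensed list is what gets handed to the AI for summarization, keeping the prompt short and the report focused.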

Why This Matters

  • Time-saving: No more scrolling through long behavior logs.
  • Professional tone: Reports are short, objective, and easy to share.
  • Enhanced oversight: Teachers get a clearer picture of digital test behavior at a glance.

Example

Instead of wading through dozens of raw log entries like:

Started task
Left page
Returned to test
Text pasted

The teacher sees a clear summary such as:

“The student briefly left the page twice and pasted text once. Behavior suggests potential use of outside resources. Recommend follow-up.”

Built with Security in Mind

  • Only authenticated teachers can access proctoring data.
  • Student activity logs are processed securely.
  • Every AI request is logged for accountability, including token usage and teacher identifiers.

The bottom line: With AI Proctor Analysis, you’ll spend less time interpreting logs and more time making informed decisions about your students’ online assessment behavior.


New Feature: Teacher Comments on Short Answer Questions

This week, while giving pretests for AP French, a teacher found they needed the ability to leave notes directly on individual student responses. Today I rolled out a new feature that makes that possible.

What’s New

When reviewing a student’s short answer question, teachers will now see a “+ Comment” button alongside the scoring tools. Clicking this button opens a simple dialog where you can type in your feedback.

  • Comments are saved directly to the system, so they’re always there the next time you revisit the student’s work.
  • Each comment is linked to a specific student and specific question, so there’s no confusion about what the feedback refers to.
  • You can edit or delete your comment at any time.
  • An optional toggle lets you make the comment visible to the student, so it can be private teacher notes or shared feedback.

Why It Matters

This update is designed to give teachers more flexibility:

  • You can jot down quick reminders for yourself (“Check this student’s phrasing with the rubric later”).
  • You can leave direct feedback for students to help them improve.
  • You can build up a history of feedback that follows the student across sessions.

Keeping It Simple

I’ve worked to keep this feature as lightweight as possible:

  • Comments save instantly, no extra steps required.
  • The display is clean, with a simple box showing the comment, teacher name, and timestamp.
  • For students, if you’ve chosen to make a comment visible, they’ll see it in their results view — but private notes stay private.

New Feature: Send Real-Time Messages to Students During a Test

We just rolled out something that a lot of you have been asking for: you can now send quick, targeted messages straight to a student while they’re in the middle of taking a test.

Picture this:
You’re monitoring your class in the proctor view. One student seems stuck on a question, another keeps leaving full-screen, and someone else clearly didn’t read the instructions. Instead of calling them out across the room (or emailing after the fact), you can click their name, type a quick note, and it pops up right on their test page.

How it works

  • Click & type: From the monitor app, click the little “message” icon next to a student’s name.
  • Choose or write: Pick from a quick template like “Stay on the test page, please.” or type your own custom note (up to 500 characters).
  • Send: The message is instantly queued for that student.

When they get it

  • If they’re online (on that test): The message usually appears within seconds, in a blue notification box with an “OK” button.
  • If they’re not online yet: No worries—messages wait in a queue until they open that same test. As long as it’s within 24 hours, it’ll show up when they start.
  • If they’ve left the test: If they come back within 24 hours, the queued messages will be there. After that, the system automatically deletes them.

Either way, every message you send is permanently logged in the student’s proctoring record for that test—so even if they never see it in real time, you still have a record of what you sent.
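The queue-with-expiry behavior can be sketched as a filter over stored messages; the field names below are hypothetical, not the platform's actual schema:

```python
import time

MESSAGE_TTL = 24 * 60 * 60  # queued messages expire after 24 hours

def pending_messages(queue, student_id, test_id, now=None):
    """Messages waiting for this student on this specific test that are
    still inside the 24-hour delivery window."""
    now = time.time() if now is None else now
    return [m for m in queue
            if m["student_id"] == student_id
            and m["test_id"] == test_id
            and now - m["sent_at"] <= MESSAGE_TTL]

queue = [
    {"student_id": 7, "test_id": 3, "sent_at": time.time() - 3600,
     "text": "Stay on the test page, please."},
    {"student_id": 7, "test_id": 3, "sent_at": time.time() - 2 * MESSAGE_TTL,
     "text": "Answer #3 needs a complete sentence."},
]
print(len(pending_messages(queue, student_id=7, test_id=3)))  # → 1
```

Filtering on both student and test is why messages can't follow a student to a different page, and the timestamp check is what enforces the 24-hour window.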

A few tips

  • Keep it short and specific. The best in-test messages are direct (“Answer #3 needs a complete sentence.”) rather than vague.
  • If a student doesn’t respond or fix the issue, follow up in person or through your regular channels after the test.
  • Remember that messages are tied to a specific test—you can’t send a “global” message that will pop up no matter what page they’re on.

✨ New Feature: AI-Powered Lesson Plan Generator

Innovation Assessments now includes a powerful new tool for teachers: an AI-supported Lesson Plan Generator designed to help you produce detailed, standards-aligned plans in seconds — and in the exact format required by platforms like Proximity Learning.

Whether you’re preparing for a live session, writing plans for Canvas, or organizing your weekly teaching outline, this tool saves you time and effort — while keeping full control in your hands.


How It Works

Just enter a plain-English description of your lesson idea — including grade level, subject, standards, and your general teaching goals. Then, with a click, the AI returns a fully formatted lesson plan including:

  • Expanded, student-friendly interpretations of standards
  • Clear learning objectives
  • Essential questions
  • Instructional components: Bell ringer, direct instruction, guided and independent practice, and exit ticket
  • Cleanly formatted table ready to paste into Canvas or Docs
  • Optional links to official standards (like CCSS or TEKS)

You can even copy the full HTML with one click or paste directly into any editor.


Why This Matters

Creating high-quality, standards-aligned lesson plans takes time — especially when working across multiple platforms or preparing for virtual sessions. This tool helps teachers focus on what matters: great instruction, not paperwork.

  • No extra logins
  • No template juggling
  • No cutting and pasting from PDFs

Just describe what you want to teach — the AI handles the structure.

Coming Soon

We’re already working on enhancements, including:

  • Editable fields for fine-tuning after generation
  • Save and revisit past plans
  • Export to downloadable Word or PDF format
  • Support for CEFR levels and LanguageBird formatting
  • Integration with recurring weekly plans or pacing guides

Try It Now

Log into your teacher dashboard, open the Utilities tab, and under Plan Book look for:

AI Lesson Plan Generator

And let us know what you’d like to see next — we’re building this tool for working educators like you.

AI Lesson Planning Tool for Remote and Virtual Educators

Teachers working with platforms like Proximity Learning, LanguageBird, and other virtual schools can now generate full, standards-aligned lesson plans instantly using AI. This tool supports Common Core (CCSS), NGSS, and TEKS-aligned plans for grades K–12, and integrates seamlessly with Canvas, Google Docs, and other LMS platforms.

Whether you’re teaching remote students in New York, Texas, California, or across the U.S., the AI Lesson Plan Generator helps ensure every plan meets your school’s formatting and curriculum standards.

Supports: math, ELA, science, social studies, and world languages. Ideal for certified teachers delivering synchronous or asynchronous instruction online.

This feature is provided by Innovation Assessments LLC — helping educators streamline planning, grading, and engagement with ethical, secure AI tools.