Introducing SlideCraft: Collaborative Presentations Without the Formatting Distraction

One of the most effective ways for students to master new content is to own it. When a student has to synthesize a topic, identify what matters, and teach it back to their peers, the learning sticks.

However, in a typical classroom, “making a presentation” often turns into a week-long odyssey of font choices, transitions, and image cropping. The actual thinking—the synthesis—gets buried under the formatting.

That’s why we built SlideCraft. It’s a new tool within Innovation Assessments designed for speed, accountability, and meaningful participation. It’s not a full-featured slide editor; it’s a structured workflow that turns a class’s collective research into a ready-to-present deck in minutes.

The Problem with “Death by PowerPoint” (and Canva, and Slides…)

In many EdTech tools, “engagement” is equated with gamification—points, music, and flashy animations. At Innovation, we believe real engagement is a matter of where cognitive load goes. We want students spending their mental effort on the history, the science, or the literature, not on the “rules of the game” or the aesthetic of a slide border.

SlideCraft is built for a specific, powerful classroom pattern:

  1. The Hook: The teacher introduces a topic.
  2. The Task: Students are assigned specific subtopics or “jigsaw” pieces.
  3. The Build: Students research quickly and build exactly one slide.
  4. The Share: The class presents the completed, unified deck immediately.

How It Works: Designed for the Live Classroom

SlideCraft lives in two places: your prep time and your live instruction.

Teacher Setup (The Prep): In configuration, you build the skeleton of the lesson. You can add up to five starter slides (intro, instructions, or framing) and then define the “prompts” students will receive. These prompts are reusable, so you can run the same activity with five different sections without starting from scratch.

The Live Session (The Action): When class starts, you launch the Live Host from your course playlist. Students join via a link from their login page and are automatically assigned one of your prompts.

As they work, you can:

  • Monitor incoming drafts in real-time.
  • Set a countdown timer or stop the session manually.
  • Count on autosave: Because this is built for real-school Wi-Fi and interruptions, student work is preserved constantly as they type.

What Students See: Focus over Frills

The student interface is intentionally lean. There are no menus for “WordArt” or background gradients. Students see:

  • Their assigned title and specific instructions.
  • A field for concise bullet points.
  • An image upload (optional).
  • A Source URL field: This is critical. By making the source a required part of the “Craft,” we reinforce academic integrity from the first click.

From “Building” to “Presenting” in One Click

The moment you stop the build session, the host view transforms into a presentation stage.

The finished deck is automatically assembled: your intro slides first, followed by the student-generated content. During the presentation, the teacher has access to a Presenter Timer and a Show Sources toggle. This allows you to pause the lesson and discuss source credibility or authority on the fly—turning a student slide into a teachable moment about information literacy.

Accountability and Scoring

SlideCraft isn’t just an “activity”—it’s an assessment. Once the presentation is over, the work doesn’t disappear. All student submissions are saved for review. Using the familiar Submissions and Score tools, you can:

  • Evaluate slides using your existing rubrics.
  • Score based on the quality of the bullets and the reliability of the sources.
  • Provide written feedback and release evaluations to students.

A First Use Case: The French Revolution

Imagine a lesson on the causes of the French Revolution.

  • Teacher Intro: 3 slides on the monarchy and the Three Estates.
  • The Build: Students are assigned prompts like The Bread Crisis, Enlightenment Ideas, The American Influence, and Louis XVI’s Debt.
  • The Result: Within 15 minutes, you have a 25-slide deck built by the class.

You aren’t just lecturing; the students are providing the evidence.

SlideCraft fills the gap between passive slide-viewing and time-consuming independent projects. It’s built for teachers who want their students to be active, collaborative, and accountable—without the “formatting fatigue.”

If you’re ready to turn your next research burst into a live class product, SlideCraft is ready for you in the Innovation dashboard.

The Growth Bonus: Rewarding Improvement While Maintaining Academic Standards

Two students submit essays that both receive a score of 75.

At first glance, their performance appears identical. But the stories behind those two scores may be very different. One student might have scored a 74 on the previous assignment—essentially maintaining the same level of work. Another might have improved dramatically from a 60.

In both cases the essays themselves may be similar in quality. Yet one student clearly demonstrated substantial learning along the way.

This raises an interesting question for teachers: should grades reflect only the current piece of work, or should they also recognize improvement over time?

In many courses, particularly those that emphasize writing and analytical thinking, improvement is an important part of the learning process. Students revise strategies, incorporate feedback, and gradually strengthen their arguments and use of evidence.

To recognize that progress without distorting the meaning of grades, some assignments may include what we call a growth bonus.

The idea is simple: meaningful improvement deserves recognition—but the quality of the current work must still matter most.


How the Growth Bonus Works

The growth bonus uses a mathematical rule that compares the current score with a previous comparable assignment.

Three values are involved:

R – the raw score on the current assignment
B – the score from a previous assignment
T – a readiness target representing strong course-level work (often around 82)

The adjusted score is calculated as:

Adjusted = max(R, R + 0.8 × max(0, R − B) − 0.2 × max(0, T − R))

In plain language, the formula does three things at the same time.

First, it rewards improvement from the previous assignment. If a student improves by ten points, most of that improvement is reflected in the adjustment.

Second, it moderates extremely large score jumps when the current essay is still below the level expected for the course. This keeps the adjustment from turning a developing essay into a top-tier score.

Finally—and importantly—the formula guarantees that the adjusted score can never be lower than the original score.

The growth bonus can help a score. It cannot hurt it.
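The adjustment rule can be sketched in a few lines of Python. This is a minimal sketch using the constants from the formula above (0.8, 0.2, and a default readiness target of 82); the function name is ours:

```python
def growth_adjusted_score(R, B, T=82):
    """Growth-bonus adjustment.

    R: raw score on the current assignment
    B: score on a previous comparable assignment
    T: readiness target (default 82)
    """
    improvement = 0.8 * max(0, R - B)   # reward improvement over B
    moderation = 0.2 * max(0, T - R)    # damp large jumps while still below T
    # The outer max() guarantees the bonus can help but never hurt.
    return max(R, R + improvement - moderation)
```

With the worked example in the next section (previous score 61, current score 72), this returns 78.8; when the new score declines, the improvement term is zero and the raw score stands.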


A Quick Example

Suppose a student scored 61 on a previous essay and 72 on the current one.

The improvement is:

72 − 61 = 11

Most of that improvement is rewarded:

0.8 × 11 = 8.8

Because the essay is still somewhat below the readiness target of 82, a small moderating adjustment is applied:

0.2 × (82 − 72) = 2

The adjusted score becomes:

72 + 8.8 − 2 = 78.8

The student’s improvement is recognized, but the final score still reflects the level of the current work.


What Happens If the Score Declines?

If the new score is lower than the previous one, the improvement term becomes zero. In theory the formula could produce a slightly lower number—but the rule

max(R, …)

ensures that the final score never drops below the original score.

In practice, this simply means the raw score stands as it is.


Why Not Just Use Standardization?

Standardization adjusts scores based on the statistical distribution of scores in the class.

A simplified version of the formula looks like this:

Standardized score = ((R − μ) / σ) × s + m

Here:

R is the raw score,
μ is the class average,
σ is the standard deviation,
and the constants s and m determine the new spread and average of the scores.

Standardization can be useful when a test turns out to be unusually difficult or unusually easy. However, it measures performance relative to the class rather than improvement over time.

In some cases it can also produce surprisingly large adjustments. A raw score in the low seventies might become a ninety simply because the class average was low.
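That kind of inflation is easy to see in a quick sketch of the simplified formula. The spread s = 10 and new average m = 80 here are illustrative choices, not values prescribed by the method:

```python
def standardized_score(R, mu, sigma, s=10, m=80):
    """Map raw score R from a class with mean mu and standard
    deviation sigma onto a new scale with spread s and average m."""
    return ((R - mu) / sigma) * s + m

# A raw 72 in a class averaging 65 (sigma 7) inflates to a 90:
# standardized_score(72, mu=65, sigma=7) -> 90.0
```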

The growth bonus approach focuses instead on learning progress—recognizing students who improve while still keeping grades tied closely to the quality of the work itself.


Why the Readiness Target Matters

The readiness target used in the formula—often around 82—represents the level of performance typically associated with strong work on AP-style writing rubrics.

It is not a passing threshold or a minimum expectation. Instead, it serves as a reference point that helps keep score adjustments realistic.

Students who are already writing at a strong level will see modest adjustments. Students who are improving rapidly will see more noticeable ones.


The Larger Goal

Ultimately, the purpose of the growth bonus is not to inflate grades. It is to encourage the kinds of behaviors that lead to real academic progress: revising writing strategies, strengthening arguments, integrating evidence more effectively, and improving clarity and precision of language.

Grades should communicate meaningful information about learning. They should reflect both where a student stands today and how far that student has come.

The growth bonus is one way of recognizing both.

Innovation Assessments LMS: Next-Gen Release

  • We’ve tightened core authentication: teachers and students can now connect Google Sign-In, making it easier to jump into any of the 12 apps (Étude, Test, Grammar, Writing, Word Study, Ordered List, Conversation, Chat, Forum, Media, Ventura, and the course hub) without managing multiple passwords.
  • Teachers get sharper control and visibility: My Students now links directly into a new Manage Enrolments matrix for one-click course assignments. Course pages display Canvas-style visibility badges so you can hide or reveal tasks instantly, and the student course view hides anything marked invisible. Submission dashboards in Étude, Test, Grammar, Conversation, Ordered List, and Writing now sort alphabetically and drop the developer-only raw JSON viewers, so grading workflows stay focused.
  • Live and asynchronous speaking tools are more authentic. Conversation tasks and upload workflows now coach students on cadence—natural rhythm, pauses, and intonation—plus we added an easy-to-copy student join link in Chat Monitor so hosts can share the live room with a single click. Chat’s monitor dashboard also highlights AI usage limits, host controls, and live transcripts for each room.
  • The LMS navigation feels smarter: “Login Preferences” is available to both teachers and students (with role-aware sidebars, help text, and instant Google linking), the teacher sidebar is scrollable and auto-hides developer tools unless you’re David Jones, and “Manage Rubrics” now supports copy-modify/delete operations with a single tap.
  • Media and forum workflows keep pace. Teachers can bulk-copy forums, toggle task visibility, and delete an entire forum plus its threads/posts with FK-safe cascading. The Create Task page has unified button styling, refreshed descriptions, and an updated “Coming Soon” panel (Survey, Audio Playlist, File on Server, Link to Webpage), making it easier to explain each element to staff.

App-by-App Highlights

  • Étude/Test/Grammar/Writing: Alphabetized submission rosters, lightweight UIs, AI license tracking in Grammar/Writing, and Google linking across all auth flows. Live tests record into test_task_student_responses, and we expose the schema dynamically via the updated inspect_schema.php.
  • Conversation (Record & Upload): Cadence coaching cards, Chromebook recording guides in the sidebar, admission logging, and Safari upload workflows that mirror the in-browser recorder.
  • Chat: Google Sign-In ready, live monitor with copyable join link, AI partner/peer modes, host controls (start/close, pair, reshuffle), and cadence-free text boxes (spellcheck disabled in chat and writ) to keep speech and writing authentic.
  • Forum: Manage Rubrics now clones/deletes as needed; forum visibility obeys course toggles; forum submissions keep their evaluation links; deleting a forum removes all threads/posts/AI usage.
  • Course Hub: Manage Enrolments grid, Canvas-style visibility icons, module-level controls, and student course views that hide anything not marked visible.
  • Media/Ventura/Word Study/Ordered List: Unified CTA styling, better descriptions, topic-specific guidance, and easy access to each element’s evaluation links from the student side.

Why It Matters

This release cements Innovation Assessments as a coherent suite rather than 12 separate tools. Authentication, visibility, enrolment, cadence training, grading, and schema inspection all work the same way across apps. Teachers can share tasks, manage rubrics, and run live sessions with less friction; students get natural speaking guidance, Google sign-in, and cleaner course views. We’re pushing the beta to customers now—watch for teacher-to-teacher sharing options (one-time copy codes) coming soon.

Trust, Accountability, and A.I. in Education

Robert Capps wrote an article for the New York Times in June 2025 titled “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.” The article articulates some key ideas about the direction of A.I. in intellectual work that, in my view, have strong relevance to education.

Mr. Capps believes that there are three areas where humans will continue to be necessary in an economy with A.I. presence: trust, integration, and taste.

“As A.I. continues to become more influential in our jobs and organizations,” writes Mr. Capps, “we’re going to develop a lot of these trust issues. Solving them will require humans.” The “trust issues” he refers to stem from the fact that A.I. can generate large amounts of data, but that doesn’t make it inherently trustworthy.

We will need humans to check the data and to check the ethics of what is being produced. Trust, he writes, is about accountability and who takes responsibility for the work product that A.I. has been used to produce. “In a number of fields,” he writes, “from law to architecture, A.I. will be able to do much of the basic work customers need… but at some point, a human, perhaps even a certified one, needs to sign off on the work.”

Mr. Capps’ purpose is to describe changes in the job market that are likely to occur in the “creative destruction” happening in the economy as A.I. technology is increasingly deployed. But the concept of trust and accountability also articulates an important element of our approach to and philosophy of integrating A.I. into our teaching platform. Every one of our applications that incorporates A.I. (and that is soon to be all of them) is designed so that A.I. work product is reviewed and monitored by the instructor before it is published in activities and assessments for students. There is, for example, no “Make me a test on positive and negative integers” button at Innovation. There is no “grade all my students’ papers please” button. All of the A.I. integrations are placed in the same user interface that would be used before there was A.I. A form is filled out, but the teacher needs to review, modify if necessary, or possibly even discard the A.I. work product.

I realize that this design policy may place Innovation at a disadvantage in the marketplace, where competition between online educational platforms is intense. Instructors may be attracted to the one-button-does-it-all platform. But my conversations with educational professionals and with my students lead me to conclude that discerning subscribers will prefer Innovation precisely because teachers want to foster that trust and demonstrate that accountability through a platform whose very design and structure promote both.

Structured AI Chat at Innovation

Innovation is proud to introduce its newest learning tool: structured AI chat.

We created this feature to empower students to practice conversations and engage with course material outside of class time. Although originally designed for world language learners, our AI chat works beautifully across disciplines, making it a versatile resource for content-based courses as well.

At Innovation, we believe technology should enhance learning in structured, meaningful ways. We call our applications “21st-century learning spaces”—they’re carefully designed to meet educational best practices and support student growth.

Teachers have two powerful ways to use AI chat: hosted or hostless. In both modes, the AI’s responses follow strict, teacher-defined parameters.

Hosted Chat: This mirrors Innovation’s original synchronous chat app where teachers facilitated real-time conversations between students. The key difference? Now, students can be paired with AI personas instead of classmates.

Hostless Chat: These are independent, self-paced chat assignments that students can complete on their own. But they’re not free-for-alls—the guardrails are still firmly in place:

  • The chat transcript is automatically recorded.
  • Students have a limited number of turns with the AI.

When teachers set up a chat, they set the boundaries of the conversation by limiting its length and defining the AI persona’s role.

Students are always responsible for starting the conversation. When the chat starts, the assignment is clear and the AI’s interaction protocols are clearly stated.

The AI persona will keep the student on track even if they attempt to distract it with irrelevant questions.

In one test, for example, a student tried to derail the conversation by asking about sports, and the AI brought the discussion back on task.

Once completed, Innovation provides an app to evaluate the quality of the student’s interactions in the chat.

This summer, I will add an AI grading assistant to help assess student work in a chat.

While this was initially envisioned for language students, its usefulness for teaching content quickly became clear.

By the way, teachers can optionally include “accessories” such as a PDF article or a video for students to review before discussion.

We tried it out with a critical discussion of the causes of the French Revolution. Ever true to the parameters set for it, the AI persisted in challenging the student to think more deeply and to define their points clearly.

The AI chat feature at Innovation has great potential to enrich assignments and promote critical thinking in content courses and linguistic fluency in language classes. Try it out!

AI Tokens

Users may have noted the new AI Dashboard in their control panel at Innovation and the new pricing tiers that reference “AI tokens”. What are AI tokens and what can you do with them?

Innovation is now integrating AI into every one of its applications. It is not only a place to teach and learn; it is now a place to create high-quality resources to support teaching and learning!

Subscribers to Innovation now have a certain allotment of tokens per month. A “token” is a fundamental unit of text that large language models (LLMs) use to process and generate language. It can be a word, a part of a word, or even a punctuation mark. When you interact with an AI model, your input (prompts) and the model’s output (responses) are broken down into tokens. The cost of using AI services is directly tied to the number of tokens processed. Generally, the more tokens used, the higher the cost. Because Innovation pays per token used via OpenAI, your monthly token allotment is designed to balance value and cost in a fair, transparent way.

During development, I maintained a logging script to see how many tokens each task consumed. Unsurprisingly, essay grading is our most token-intensive task, averaging around 1,881 tokens per interaction. Vocabulary list generation is the least token-intensive, consuming significantly fewer tokens.

Of course, token usage varies widely. Some teachers provide detailed outlines when asking the AI to generate tests or discussion prompts based on video or reading material. Others might use the AI heavily for grading essays or enabling student chat discussions. Your usage will shape how far your tokens go.

At the “pro” tier, you get 100,000 AI tokens per month. This resets every 30 days from the date your paid subscription began. So what can you do with that?

Based on my own usage (remember, I teach remotely part-time out of Innovation myself!), 100,000 tokens is quite a lot of creative power at your fingertips. You could grade around 53 essays (yes, those token-intensive ones!), or generate over 130 sets of test questions for your classes. Need a quick conversation starter for a foreign language class or a debate prompt? You could generate almost 180 conversations. And if you’re building vocabulary, you’re in luck: you could create an incredible 877 vocabulary lists. It really opens up a world of possibilities for creating high-quality teaching and learning resources.
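As a rough planner, those capacity figures come down to simple integer division. In the sketch below, the essay-grading average (1,881 tokens) is the measured figure from my logging; the other per-task averages are our back-of-envelope estimates implied by the quoted capacities, not published numbers:

```python
# Approximate average tokens consumed per task. Only essay_grading is a
# measured average; the rest are illustrative estimates.
AVG_TOKENS = {
    "essay_grading": 1881,
    "test_question_set": 750,
    "conversation_prompt": 560,
    "vocabulary_list": 114,
}

def monthly_capacity(budget=100_000):
    """Return how many of each task a monthly token budget covers."""
    return {task: budget // cost for task, cost in AVG_TOKENS.items()}

# monthly_capacity() -> {'essay_grading': 53, 'test_question_set': 133,
#                        'conversation_prompt': 178, 'vocabulary_list': 877}
```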

✨ Let the AI Teaching Assistant Help you Generate Questions to Embed with Video and PDF.

Need comprehension or analysis questions for a PDF or a video? That can be incredibly time-consuming — but Innovation’s Teaching Assistant is here to help!

Just open the Étude app. Upload your PDF or paste your video embed code, then add it to your AI request configuration. In seconds, you’ll have high-quality questions based on your stimulus, crafted in the language and level of sophistication you choose.

Check it out and see how much time you’ll save!

✨ How To Score Writing Tasks Using AI

The AI Grading Assistant integrated into Innovation is a powerful tool designed to streamline the assessment of student writing tasks.

With just a click, you can apply one of the pre-installed rubrics or upload and use your own custom rubric. After a brief processing time, you’ll receive a detailed second opinion to help you balance and validate your own evaluation of the student’s work. The AI’s assessment is based both on the selected rubric criteria and on the advanced capabilities of a large generative AI model. As of this writing, Innovation uses GPT-4o for essay scoring, ensuring fast, consistent, and thoughtfully reasoned feedback.

✨ Make a Jeopardy Game with Innovation’s AI Integration!

My students have always loved playing Jeopardy! Oh, sorry, trademark issue… I mean “Jeopardy-like trivia games in class.” 😏

Our game is called “Ventura”.

Innovation has had a fantastic app for generating such games for years now. As part of our integration of AI into our whole system, teachers can now employ our AI teaching assistant in generating Jeopardy games!

Just like for creating test questions, teachers configure the request to OpenAI in the Ventura game.

Use the teaching assistant to generate questions. Use them as-is or edit them. Add images or audio clips!

Holy cow, I remember the old days back in the 1990s when I would use PowerPoint to make a Jeopardy game for review day. It took a really long time to enter all the questions and answers even when I had a template game prepared!

Now I can make a game in 2–3 minutes! The generator uses one of OpenAI’s contemporary models, so you can rely on the question quality.

Enjoy!

Interactive Activities at Innovation

Those of us who are teaching remotely are starved for interactive apps that let us engage our students beyond screen sharing! Innovation is constantly adding apps and modifications to meet those needs.

Live Sessions

“Live sessions” are interactive activities that students “join” through the Innovation platform.

Multiple-choice, short answer, and media activity types can all be transformed into live sessions! Just select Live Session from the Create dropdown by your activity in the course playlist. Click Live Link and copy the URL. Send it to students in, for example, the Zoom or Teams chat.

Once they join, the teacher host can present one question at a time and await student responses.

Once students respond, the teacher is notified and can debrief by displaying responses anonymously.

During the media live session, the teacher presents a slideshow and periodically opens the system for responses, poses a question, and awaits replies.

Activity Monitoring

During composition writing, grammar activities, short answer, and Étude tasks, the teacher can activate the Monitor app, found in the Task dropdown for the activity in the playlist. As students work on the task, instructors can view their progress with minimal delay. Read more here.

Teachers can hide the student names and the correct answers so they can share the screen with students as they work.

Asynchronous Forums and Synchronous Chat

Read more about moderated synchronous chat here and here.

Read more about asynchronous forums here.

Give Students Quick Access

Innovation prides itself on the flexibility to plug in to any learning management system and to be easily integrated in video-conference remote lessons.

From the course playlist, you can send students a link to an activity by clicking the link icon on the right. Paste the link into the video conferencing chat window.

Send students a link to the assessment debriefing (the student’s assessment and correct answers to the task) using the icon below that.

From within the Live Sessions, the same functionality exists.