Need comprehension or analysis questions for a PDF or a video? That can be incredibly time-consuming — but Innovation’s Teaching Assistant is here to help!
Just open the Étude app. Upload your PDF or paste your video embed code, then add it to your AI request configuration. In seconds, you’ll have high-quality questions based on your stimulus, crafted in the language and level of sophistication you choose.
The AI Grading Assistant integrated into Innovation is a powerful tool designed to streamline the assessment of student writing tasks.
With just a click, you can apply one of the pre-installed rubrics or upload and use your own custom rubric. After a brief processing time, you’ll receive a detailed second opinion to help you balance and validate your own evaluation of the student’s work. The AI’s assessment is based both on the selected rubric criteria and on the advanced capabilities of a large generative AI model. As of this writing, Innovation uses GPT-4o for essay scoring, ensuring fast, consistent, and thoughtfully reasoned feedback.
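To make the rubric idea concrete, here is a minimal sketch of how per-criterion rubric scores can be rolled up into an overall mark. The criteria names, weights, and formula below are hypothetical illustrations, not Innovation's actual scoring logic:

```python
# Hypothetical weighted-rubric roll-up -- an illustration only, not
# Innovation's actual formula. Each criterion is scored on a 0-scale range,
# weighted, then normalized to a 0-100 mark.

def rubric_score(scores: dict[str, int], weights: dict[str, float], scale: int = 4) -> float:
    """Weighted average of per-criterion scores, normalized to 0-100."""
    total_weight = sum(weights.values())
    weighted = sum(scores[c] * weights[c] for c in scores)
    return round(100 * weighted / (scale * total_weight), 1)

# Example: a three-criterion writing rubric scored out of 4 points each.
scores  = {"content": 4, "organization": 3, "language use": 3}
weights = {"content": 0.4, "organization": 0.3, "language use": 0.3}
print(rubric_score(scores, weights))  # 85.0
```

Whatever the exact formula, the point is the same: the rubric supplies the structure, and the AI's judgment fills in the per-criterion scores for the teacher to review.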
We are excited to be rolling out our massive upgrade to AI this month! Already, subscribers will notice the little purple buttons all over the site controls offering AI assistance with test question generation, grading student work, and tasks specifically geared toward teaching modern languages.
Subscribers will be invited to agree to the new terms of service when everything is up and running in June. Here is the text of that change:
AI-ENABLED FEATURES AND RESPONSIBILITIES
Innovation Assessments LLC now provides access to a variety of artificial intelligence (AI)–powered tools to enhance educational services. These may include, but are not limited to, automated test question generation, grading support for essays and short answers, rubric design assistance, writing prompt creation, and supervised chat-based discussion with AI for students. Users should be aware that AI-generated content may contain inaccuracies or reflect inherent biases, and human oversight is crucial.
Use of AI services is subject to the following terms:
Students may only access AI-powered chat or discussion tools under their teacher's license, and only if the teacher has enabled this feature for their activity.
All AI-generated content is provided “as-is” and may require human review. Teachers are responsible for reviewing all materials prior to use in assessments or instruction.
Essay grading by AI is advisory in nature. Final evaluation remains at the discretion of the teacher or institution.
Teachers and students may not use the platform’s AI features to submit or generate content that is harmful, discriminatory, or in violation of academic integrity policies (which include, but are not limited to, plagiarism and unauthorized assistance).
Innovation Assessments LLC reserves the right to monitor, restrict, or disable AI usage in cases of misuse, abuse, or usage patterns that negatively impact the platform’s performance or other users.
Innovation Assessments LLC will handle data generated through AI features in accordance with its Privacy Policy. Innovation Assessments LLC may update or modify its AI-powered features and functionalities over time. By using any AI-related features, you acknowledge the limitations of current AI technology and agree not to rely solely on AI-generated outputs for high-stakes educational decisions.
AI TOKEN USAGE
Access to AI features is governed by a monthly token system. Each account tier includes a set number of AI tokens per month, which may be used for supported features such as question generation, AI chat, grading support, and other automated tools. Tokens renew every 30 days from the date of paid subscription.
AI tokens do not roll over. Unused tokens expire at the end of each 30-day cycle.
Users may purchase additional token bundles if their monthly allotment is exhausted before renewal.
Token usage is calculated by the AI provider and may vary based on the feature and the amount of text processed in both the request and the response. Requests with more text consume more tokens, as does more detailed or lengthy AI-generated content. Higher-cost actions (e.g., full essay scoring) consume more tokens.
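For a rough sense of what drives token consumption, here is a back-of-the-envelope estimator. It uses the common rule of thumb of roughly 4 characters per token for English text; the provider's own tokenizer is what actually determines the charge, so treat this as a planning aid, not billing math:

```python
# Back-of-the-envelope token estimator -- a sketch, not Innovation's billing
# logic. Assumes the rough rule of thumb of ~4 characters per token for
# English text; the AI provider's tokenizer determines the real count.

def estimate_tokens(text: str) -> int:
    """Estimate token count from character length (about 4 chars per token)."""
    return max(1, round(len(text) / 4))

def estimate_request_cost(prompt: str, expected_response_chars: int) -> int:
    """Tokens are billed for both the request and the response."""
    return estimate_tokens(prompt) + max(1, round(expected_response_chars / 4))

prompt = "Write five comprehension questions about the attached reading passage."
print(estimate_tokens(prompt))              # a small prompt costs little
print(estimate_request_cost(prompt, 2000))  # a long response dominates the cost
```

Notice that the expected response length dominates: asking for ten detailed questions costs far more than asking for three short ones.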
Token balances and consumption details are available to account administrators within the platform dashboard.
It is the user’s responsibility to monitor token use and purchase additional tokens as needed.
Innovation Assessments LLC reserves the right to modify token costs, tier allowances, or features covered by tokens with notice. Abuse of the token system may result in service restrictions or termination.
My students have always loved playing Jeopardy! Oh, sorry, trademark issue… I mean “Jeopardy-like trivia games in class.” 😏
Our game is called “Ventura”.
Innovation has had a fantastic app for generating such games for years now. As part of our integration of AI into our whole system, teachers can now employ our AI teaching assistant in generating Jeopardy-style games!
Just like for creating test questions, teachers configure the request to OpenAI in the Ventura game.
Use the teaching assistant to generate questions. Use them as-is or edit them. Add images or audio clips!
Holy cow, I remember the old days back in the 1990s when I would use PowerPoint to make a Jeopardy game for review day. It took a really long time to enter all the questions and answers even when I had a template game prepared!
Now I can make a game in 2-3 minutes! The test generator uses one of OpenAI’s contemporary models, so you can rely on the question quality.
We are working feverishly to integrate OpenAI into Innovation this month! It is so exciting to see how this enhances our work! It’s like having a professional teaching assistant!
The new Test app has many enhancements over the previous app. Among the improvements (besides AI support) are:
It can have a mix of short answer and multiple-choice questions.
Teachers can import questions from the test bank or from other tests, create brand-new questions, or use our AI question generator.
Teachers can edit the questions right in the new editor app, including attaching audio clips and images if needed.
The student test area has a new, modern layout and user-friendly design, including full security, support for international characters, and a feature to mark questions for later review.
For short answer questions, teachers can avail themselves of the AI grading assistant! While we have left the algorithmic AI installed, using generative AI saves you the trouble of pre-training the grading assistant.
Innovation is excited to announce that we are working diligently to integrate OpenAI artificial intelligence into all aspects of the Innovation platform! We are developing a virtual teaching assistant that will help teachers generate “just right” learning activities and assessments.
Look for the purple button throughout the site …
… or the ✨ emoji to signal where AI integrations have been installed.
AI integration has been a goal at Innovation since before the release of the generative AI models in the early 2020s. Since 2018, Innovation has sported a pretty nifty algorithmic AI that helps grade short written work submissions and proctor student online activity. Now with the integration of OpenAI’s generative AI models (GPT-3.5 and GPT-4o), we can truly realize the dream of a highly productive and efficient virtual teaching assistant!
When Innovation started out under a different name some 25 years ago, it was mainly a test generator (thus, the name). Since the pandemic, it has been meeting the needs of remote instructors and in-person classrooms alike not only in assessment but in content delivery in a variety of subjects, including for world language instruction.
Innovation is a place to create. It’s a place where teacher-authors can generate “just right” learning activities and assessments for their teaching context instead of textbook company generics. But we understand that secondary school teachers are busier than ever, seeking to meet a growing diversity of needs and ever-changing objectives and curricula. With Innovation’s AI teaching assistant, teachers can redirect their creative energies to the big picture of student learning and measurement.
Check out this simple how-to, for example, which illustrates how you can use the AI teaching assistant to help you generate a test.
Check back with us as the spring unfolds! At this writing, we have already installed OpenAI teaching assistants in:
The best way to learn to write well is to write, then review selected errors with an instructor for learning and practice. When I was teaching in person, I would assign a composition in my French classes at the end of the unit, to be done without any notes or references. I would then gather up the mistakes students made, and we would commit to studying them and learning to correct them. It is a great method for promoting accurate and fluent writing in a second language.
Teaching online, however, the free-write compositions I received from students, even in one-on-one classes, were often AI-generated to such a degree that the students could not really claim ownership. In one-on-one lessons, I did not always let on that I knew what they had done, or sometimes I just made light of it. I could turn it into a useful exercise by asking the student to explain some of the tenses or structures they used. But it is not the same. I felt that, going into remote learning, I had lost an important language-training practice.
I have been enjoying success with a new kind of exercise for teaching composition. I will not claim to have invented it as surely someone, somewhere, has already done so. But I do say this method is not one I have seen or used before.
The student is presented with a series of prompts that constitute a composition in the target language of two to four paragraphs. The Innovation app presents them with one prompt at a time.
The prompt is a set of sentences that are in random order. One task for the student is to read these and arrange them in the best order. I design these using the unit theme vocabulary, so it is good practice in reading comprehension as well as in composing cohesive writing samples.
Often, especially for younger learners, I remove a word from each sentence and put it in a word bank. So now students have to not only rearrange the sentences, but they need to fill in the blanks based on context. Again, it’s a support for reading and composition.
Another strategy, especially for advanced learners, is to display the verbs as infinitives for the student to conjugate. In addition, I can remove the transitional phrases and ask students to supply them. Sometimes I include a prompt asking the student to add one sentence of their own, perhaps by providing an example of what is being discussed.
The prompts are displayed to students as images rather than plain text, which creates an obstacle for those who would want to paste them into an AI to do the work for them. Displaying the prompt as an image file forces the student to write for themselves. An added benefit is that this promotes lengthier writing from students who normally write far too briefly.
Innovation makes this easy! I select the “Single Short Answer task” from the Short Answer controls. I add each prompt with the answer key.
Then I add the screenshot of the prompt.
The short answer app at Innovation lets me place obstacles in the way of AI use and helps me generate practice exercises that develop students’ composition skills in the target language. Students get practice seeing and copying language in its standard, correct forms. They practice reading comprehension and the current theme vocabulary. They can rehearse transitional expressions and practice devising cohesive compositions. Prompting students to “Add one sentence of your own” promotes synthesis.
The tasks are easy and quick to score. From the course playlist, select Task, then Score One Student, and easily compare the student’s response to the answer key.
Although my preference is still for a free-write composition assignment, I can see many advantages to this one. I began developing this task with a mind to place obstacles in the way of student misuse of AI translators. I think I ended up with an exercise that may arguably be better than free writing.
AI assistance and translators such as DeepL and Google Translate are very accurate and useful tools. When I assign my French students certain tasks, I expect they will use these tools to help them, just as I would have expected in-person students thirty years ago to use a French-English dictionary for spelling and new words on certain tasks.
The problem is that the temptation to just have the AI generate the work is a strong one. It is important for remote instructors to place obstacles in the way of this practice, which not only undermines the student’s training but represents an ethical pitfall.
Imagine this scenario: an online AP Spanish class where major assessments are take-home tasks like essays and video-recordings of presentations. Using the traditional paradigm for this assignment, the student is given guidelines and due dates and a rubric with a graphic organizer. The instructor provides all that. Then the due date comes, all the work is in, and the instructor begins to review the work. What impressive vocabulary! What elegant grammar! And yet, reflecting on the spontaneous language generated by these same students in video-conference live sessions, it is hard to believe that this work could come from some of them.
All of my remote courses begin with a training film of sorts in which I explain the concept of academic integrity and ownership of one’s work submissions. I explain that it is expected that students will learn all of the new words they incorporate into their work submissions so as to maintain ownership of the task. I demonstrate using a translator properly and improperly.
A very useful strategy is one I have used since in-person days decades ago: simply ask the student the meanings of the words in their work that I suspect they do not know. In the remote learning context, this can be difficult to arrange, since there is no easy way to pull a student aside during class and conduct the interview about their work. That’s where Innovation comes in.
Sample student work that was too perfect for their demonstrated abilities.
It only takes a few minutes to select words and phrases from the student’s work submission that I believe they do not likely know. I select seven to ten words or phrases and I generate a short answer translation quiz using Innovation’s Quick Short Answer.
I enter a title, maybe set the category, and enter the words with English first, an equal sign, then the French.
Innovation’s app separates each word from its meaning at the equals sign. Once I have generated the quiz, I access the quiz Master app. I set the time limit to 1 minute for 7-10 words and turn on high security.
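To illustrate the "English = French" input format, here is a hypothetical parser in the spirit of what the app does internally (this is my sketch, not Innovation's code): each non-empty line is split at the first equals sign into a prompt and its answer key.

```python
# Illustrative parser for the "English = French" quiz format described above.
# Hypothetical sketch -- Innovation's Quick Short Answer handles this
# internally; only the input format comes from the post.

def parse_quiz_lines(raw: str) -> list[tuple[str, str]]:
    """Split each non-empty line on the first '=' into (English, French)."""
    items = []
    for line in raw.splitlines():
        if "=" not in line:
            continue  # skip blank or malformed lines
        english, french = line.split("=", 1)
        items.append((english.strip(), french.strip()))
    return items

sample = """the suburbs = la banlieue
to move house = déménager
the neighborhood = le quartier"""
for en, fr in parse_quiz_lines(sample):
    print(f"{en} -> {fr}")
```

Splitting on only the first equals sign means an answer containing "=" would still parse, which is the safe choice for free-form vocabulary lists.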
With high security on, the assessment will submit and lock the student out if the student leaves the window to click on something else. Only the teacher can re-admit the student to the quiz. The window resizes to full screen when the student starts the task, and if they resize it smaller, the proctor records it. The proctor also records the start time, how much time was spent on the questions, whether text was pasted, and more.
As a final step, I lock the quiz so it accepts only certain access codes. This allows control over how many times a student can restart the task. Simply select the Task dropdown from the playlist and then select Lock. Instructors can view the access codes from the Task dropdown or can generate one key by clicking the One Key button next to the title.
Sometimes, I will ask that the local facilitator proctor the student during the quiz so that they cannot look up the words on their own device.
So now what does one do with the results? When first introducing this strategy to students, I explain that it will not affect their grade “this time” and that it is a good reminder to make sure students have full “ownership” of their work. I may randomly select students for this verification, or if it’s a small class I may include it as a portion of their grade for a task and send one to everyone.
It’s not necessary for a student to get 100%. I usually take the quiz first to test it out and see how many I can do in 1 minute. Even if they do not get 100%, I can learn a lot from their responses. For example, one student got 44% right out of 8 items and did so by skipping around. I interpret the skipped words as ones she forgot and intended to come back to later. Another student got only 33%. I interpret that as a definite sign that his work submission had too many looked-up words he did not know. I let him off with a warning this time and a reminder about academic integrity and ownership.
I once had the experience of taking over a class partway through the year. No structures had been in place to discourage inappropriate use of AI. The grades were all outrageously good. Some students were rarely in attendance and only handed in work that was graded. They did this work using AI, so it took no real effort. This really is a terribly corrupt system, especially given that there are students in nearby schools taking in-person classes who have to really do the work for their marks. There are honest students with good attendance who have lower grades for their honesty. It was an AP-level course. Now, you might argue that the students would not possibly be ready for the AP exam if they took the course this way. One would think that would deter them from cheating. But upon reflection, it’s clear that having a 98 in an AP class on one’s transcript, even with only a 2 on the exam, could be valuable for college admissions. So, no, it does not deter them.
Remote learning has enormous potential, and I have great confidence in it. But we instructors need to learn how to maintain the same standards as we had during in-person sessions. We cannot allow a situation where students in remote classes become mere pass-through vehicles for AI translators that do all their work. That would amount to a sort of scam. In part 2 of this topic, I will present a strategy for teaching composition in this new world of AI-assisted homework.
Those of us who are teaching remotely are starved for interactive apps that let us engage our students beyond screen sharing! Innovation is constantly adding apps and modifications to meet those needs.
Live Sessions
“Live sessions” are interactive sessions that students “join” through the Innovation platform.
Multiple-choice, short answer, and media activity types can all be transformed into live sessions! Just select Live Session from the Create dropdown by your activity in the course playlist. Click Live Link and copy the URL. Send to students in, for example, the Zoom or Teams chat.
Once they join, the teacher host can present one question at a time and await student responses.
Once students respond, the teacher is notified and can debrief by displaying responses anonymously.
During the media live session, the teacher presents a slideshow and periodically opens the system for responses, poses a question, and awaits replies.
Activity Monitoring
During composition writing, grammar activities, short answer, and Etude tasks, the teacher can activate the Monitor app. This is found in the Task dropdown for the activity in the playlist. As students work on the task, instructors can view their progress with a minimal time delay. Read more here.
Teachers can hide the student names and the correct answers so they can share the screen with students as they work.
Innovation prides itself on the flexibility to plug in to any learning management system and to be easily integrated in video-conference remote lessons.
From the course playlist, you can send students a link to an activity by clicking the link icon on the right. Paste the link into the video conferencing chat window.
Send students a link to the assessment debriefing (the student’s assessment and correct answers to the task) using the icon below that.
From within the Live Sessions, the same functionality exists.
How to promote academic integrity in remote learning and in-person classrooms with 1:1 laptops
My interest in devising 21st century learning spaces really took off during the pandemic. The school district I was working in at the time had already moved to get all middle and high school students Chromebooks, and all classrooms had interactive projectors (“Smartboards” at the time). I had the advantage of two perspectives on this: one as a former IT guy (I was district technology coordinator in a few schools in addition to my full-time teaching role, and a certified network admin) and one as a teacher. I knew we were just co-opting office productivity software for classroom use, and it just was not cutting the mustard. Most notably, in our move to digital learning spaces, we lost some of the guardrails we used to maintain academic integrity.
What I mean by digital learning space, a term I use interchangeably with “21st century learning space”, is a software application hosted on the internet in which students conduct their studies and teachers conduct their lessons. My phrase “maintain academic integrity”, well, that mostly just means it was harder to keep kids from cheating.
This situation has come a long way since that time. Schools use a number of content filters, tracking apps, and screen-monitoring tools that are quite effective. But there are still gaps, and I would put Innovation forward as a remedy for some of them. Innovation plugs easily into any LMS via convenient links.
Security Tier 1
The apps at Innovation fall into several functional tiers. Tier 1 entails just recording and reporting student activity on the apps. The Proctor is installed to monitor student activity as they interact with the digital learning space. It logs the following student actions:
started task
left the page
returned to the page
pasted in text
resized window
saved work
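As a sketch of what a Tier 1 event log like this might look like under the hood, here is a minimal, hypothetical structure. The event names come from the list above; everything else (the class, its schema, the counting method) is my own illustration, not the Proctor's actual implementation:

```python
# Minimal sketch of a Tier 1 proctor event log. Hypothetical structure --
# only the event names come from the post; the schema is illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

EVENTS = {"started task", "left the page", "returned to the page",
          "pasted in text", "resized window", "saved work"}

@dataclass
class ProctorLog:
    student: str
    events: list = field(default_factory=list)

    def record(self, event: str) -> None:
        """Timestamp and store one observed student action."""
        if event not in EVENTS:
            raise ValueError(f"unknown event: {event}")
        self.events.append((datetime.now(timezone.utc), event))

    def count(self, event: str) -> int:
        """How many times an action occurred -- e.g. for a monitoring summary."""
        return sum(1 for _, e in self.events if e == event)

log = ProctorLog("student_01")
log.record("started task")
log.record("pasted in text")
log.record("saved work")
print(log.count("pasted in text"))  # 1
```

The key design point is that Tier 1 only records and reports; deciding what (if anything) to do about a suspicious pattern stays with the teacher.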
Tier 1 security on the short answer adds some extra records, such as a notification when a student deletes their entire response and a comparison of the size of newly saved work to the answer it replaced. This was devised in response to a student who used to delete all his work and then claim he needed a retake because the system did not save. 🙄
In addition, the short answer task blocks certain other actions, such as right-click, spell-check, Grammarly, activating dev tools, and the like.
Tier 1 security is applied by default on the Etude, short answer, grammar, world language composition, and media proctor. The media proctor records:
video started
video paused
left page
returned to page
duration engaged with video
At the tier 1 security level, the idea is to record detailed information about student engagement and to provide two things: 1) messaging to students showing what is being recorded and 2) reports for instructors who may or may not wish to take action on what the proctor saw. Just telling students that their actions were suspicious (like pasting in text) can serve to deter some mischief.
Tier 1 security is enhanced by the new Monitor app. This allows teachers to view student work progress on a task in near real time (there’s a 10-second delay after the student saves, but it’s still pretty quick). Monitor is available for short answer, grammar, world language composition, and Etudes. The Monitor displays all students who have saved work to the task. Select a student, and their work is shown. The proctor summary shows how many times each student has performed each of the proscribed actions.
The multiple-choice app by default has security tier 2.
Security Tier 2
Tier 2 security is enabled by the teacher on the Master page for a task. The Master page is accessed from the course playlist under the Task dropdown. Select “Modify test” from the controls at the top and check the “High Security” checkbox.
When high security is enabled, the short answer task will close and submit responses if the student gives focus to any other page. The student will be locked out until they are formally re-admitted. Re-admit students from the course playlist using the Task dropdown in the controls on the right of the task.
In addition, short answer and multiple-choice tasks can be locked to single-use key codes. Once locked, teachers need to provide each student a different code from the generated list in order to allow access to the test. This controls the number of attempts in situations where students get a limited number of chances.
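The single-use key idea is simple to sketch: generate one random code per student, and "burn" each code the first time it is redeemed. The snippet below is a hypothetical illustration of that pattern, not Innovation's implementation:

```python
# Sketch of single-use access keys -- a hypothetical illustration of locking
# a task to one-time codes, not Innovation's actual code.
import secrets

def generate_keys(n: int) -> set[str]:
    """One short random code per student (8 hex characters each)."""
    return {secrets.token_hex(4) for _ in range(n)}

def redeem(keys: set[str], code: str) -> bool:
    """Admit the student and burn the code so it cannot be reused."""
    if code in keys:
        keys.remove(code)
        return True
    return False

keys = generate_keys(25)
code = next(iter(keys))
print(redeem(keys, code))  # True on first use
print(redeem(keys, code))  # False on reuse
```

Because each code works exactly once, handing out one code per student caps the number of attempts without any per-student configuration.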
Further, teachers who need this level of security are encouraged to set time limits on the tasks. This will discourage cheating because it often takes time to look things up. In cases where some students get more time on task, you can set exceptions from the testing modification controls in Utilities. Go to Virtual Classroom and Testing Accommodations.
Tier 2 security can be enhanced by having a proctor with students to prevent accessing other devices. In addition, some schools have screen monitoring software like GoGuardian that can assist in monitoring. Perhaps this would be called “tier 3”?