
Apply AI / LLM analysis to your submission data

Run AI prompts on the submissions that come in

Written by Zack Schwartz
Updated over a week ago

OpenWater AI, introduced in December 2025, is a powerful feature that lets you run custom AI prompts against your submission data in a highly flexible way. This enables a wide range of use cases, from summarization and translation to scoring, validation, and anonymization.

Below are a few example use cases. These are only samples. You can design your own prompts to fit your program’s needs.

  • Summarize a submission in 200 characters

  • Highlight two or three points that make this submission stand out

  • Identify areas in a submission that require additional scrutiny

  • Translate submitted content into a target language

  • Check whether a submission addresses all required talking points

  • Evaluate writing quality on a scale of 1–10, with 10 being the highest

  • Determine whether a submission violates specific eligibility guidelines

  • Score a submission and provide commentary based on judging criteria (AI-assisted judging)

  • Identify claims that require verification before publication

  • Remove identifying or personally identifiable information from a narrative

  • Summarize judge comments in a clear and professional tone

OpenWater AI is currently an opt-in feature. Please contact your support representative to have it enabled on your account.

AI usage is measured in tokens, a standard pricing model for AI services. Tokens roughly correspond to the number of words processed as input and generated as output. Each customer receives an initial allocation of 10 million tokens at no cost for a limited time during this beta period, with the option to purchase additional tokens if needed.
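As a rough rule of thumb (an approximation, not a billing guarantee), one token is about three-quarters of an English word. A prompt plus submission text totaling 1,500 words is therefore roughly 2,000 input tokens; with a 200-word response added, each run consumes on the order of 2,300 tokens, so the initial allocation covers roughly 4,000 such runs.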

After OpenWater AI is enabled on your instance, you can use it immediately on any Submission Round. To get started, click Manage on your program, go to Round Settings, and click AI Prompts.

Click Add New AI Prompt. Give your prompt a clear, identifiable name and an alias. Aliases are important because they can be reused throughout the system, including judge views, galleries, email blasts, and more.
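For example (names purely illustrative), a prompt called Executive Summary might use the alias ExecutiveSummary, which you could then insert into a judge view or an email blast wherever you want that summary to appear.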

For Execution Mode, choose whether the prompt should always run, be temporarily disabled, or run only when a submission meets specific conditions.

System Prompt vs User Prompt

Each AI Prompt is made up of two parts: a System Prompt and a User Prompt. They serve different purposes and should be written differently.

System Prompt

The System Prompt sets the rules and behavior for the AI. Think of it as instructions that define how the AI should think and respond, not the specific task itself.

Use the System Prompt to:

  • Define tone (formal, neutral, friendly, strict, etc.)

  • Set constraints (length limits, formatting rules, scoring scales)

  • Establish the role of the AI (judge, editor, translator, evaluator)

Example System Prompt:

You are an impartial evaluator. Respond concisely. Do not invent information. If the content is unclear, say so.

Tip:
If you’re not sure what to write, it’s perfectly fine to go to ChatGPT and describe what you’re trying to accomplish, then ask it to generate a good system prompt for you.


User Prompt

The User Prompt is the actual task you want the AI to perform. This is where you reference submission data and ask specific questions.

Use the User Prompt to:

  • Ask for summaries, scores, translations, or analysis

  • Reference submission content using variables

  • Be explicit about what output you want

Example User Prompt:

Summarize the content below in 200 characters:

{Applicant Narrative}

Inserting Variables

You can insert submission data into either the System Prompt or User Prompt using Insert Variable.

  1. Click Insert Variable

  2. Select the field you want (Applicant fields, submission fields, etc.)

  3. Insert it into your prompt text

Variables are replaced with real submission data at runtime.

Example:

Summarize the content below:

{Essay Response}
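At runtime, the variable is replaced with the applicant's actual data. For example (hypothetical data), if the Essay Response field contains "Our project expands rural broadband access to three counties...", the text sent to the model becomes:

Summarize the content below:

Our project expands rural broadband access to three counties...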

Parsing Uploaded Files

If your submission includes uploaded files, you can have the AI read them.

  1. Check Parse Uploaded File

  2. Include the {MarkdownContent} variable in your prompt

The system will attempt to extract text from:

  • PDFs

  • Word documents

  • Images (OCR)

Example User Prompt:

Summarize the document below:

{MarkdownContent}

Important notes:

  • File parsing is best-effort, not guaranteed

  • Formatting and accuracy may vary depending on the file

  • Always design prompts defensively in case content is incomplete (see the example below)
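Example of a defensively written prompt (illustrative only):

Summarize the document below in 200 characters. If the document text is missing, garbled, or incomplete, respond with "Document could not be parsed" instead of guessing.

{MarkdownContent}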

WYSIWYG vs Template Rendering

Render User Prompt from WYSIWYG

This is the default and recommended option for most users.

  • Write prompts directly in the editor

  • Insert variables visually

  • Best for straightforward, readable prompts

Use this unless you know you need more control.

Render User Prompt from Template (Advanced)

This option is intended for advanced users only.

  • Allows conditional logic (if/else)

  • Enables entirely different prompts based on submission attributes

  • Useful for complex workflows where prompt behavior changes dynamically

If you don’t explicitly need conditional logic, stick with WYSIWYG.
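As a purely illustrative sketch (the exact template syntax depends on your OpenWater configuration, so treat the tags below as pseudocode rather than the engine's real syntax), a conditional template might branch on a submission field like this:

{{ if Category == "Research" }}
Evaluate the methodology of the research submission below: {Applicant Narrative}
{{ else }}
Summarize the submission below in 200 characters: {Applicant Narrative}
{{ end }}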


Choosing the AI Model

By default, your AI Prompt will use the default model configured in the System Features area.

If needed, you can override this on a per-prompt basis by selecting a different model in the Model Name dropdown at the bottom of the screen.

This allows you to:

  • Use lighter models for simple tasks

  • Use stronger models for judging or complex analysis

  • Experiment without changing system-wide settings


Testing

The easiest way to test the quality of your prompts is to load up a test submission as an administrator and then click AI Prompt Results.

Then click Rerun AI Prompts.

You can keep your AI Prompt configuration open in another window, make adjustments, and repeatedly rerun the prompt against test submissions until you’re getting the results you want.

If you need to rerun prompts across all submissions, go to Round Settings → AI Prompts and click Rerun AI Prompts. This is especially useful if submissions were collected before AI Prompts were enabled and you decide to apply them retroactively. Be mindful of your AI token usage when rerunning across many submissions.
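For example (rough numbers for planning only), if a prompt plus a typical submission totals about 2,000 tokens of combined input and output, rerunning it across 1,000 submissions consumes roughly 2 million tokens, a fifth of the initial 10 million token allocation.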


Reports

You can export AI prompt results to Excel via your reports. Load any of your reports via Submissions > Reports, then tab over to CSV/XLS Exports.

Click Subset of Fields, then Configure Fields, and choose which AI Prompt fields you would like to export to Excel.


Insert Variables In Many Places

Beyond reports, you may want to include AI results in the various interfaces your stakeholders interact with. For example, in Round Settings > Evaluation Layout, you can insert an AI result so your judges see it as they evaluate a submission.


Other areas where you may wish to insert variables include:

  • Email blasts

  • Public submission and session galleries

  • Showing scores and comments to applicants


AI Models

OpenWater recognizes that different models are optimized for different tasks. Try out different models with the prompts you intend to write to see which perform best for your use cases.

OpenWater's AI is powered by Cloudflare, which offers a range of open-source models. You can set a default model and optionally override it on a per-prompt basis, as described above. To change your default model, go to System Settings and then System Features.

Under OpenWater Managed, we currently offer three AI models that provide a balanced mix of intelligence, speed, and cost. As of this writing, the available models are:

  • meta/llama-4-scout-17b-16e-instruct

  • mistral/mistral-7b-instruct-v0.2

  • ibm-granite/granite-4.0-h-micro

OpenWater reserves the right to update the list of available models as needed. A good way to evaluate these models is to visit the Cloudflare AI Playground, select a model, and test system and user prompts similar to what you plan to use within your OpenWater platform.

It is your responsibility to ensure that the model you choose complies with your organization's policies on AI usage.

"These models don't work for me"

That's okay. You can switch off OpenWater Managed and choose to provide your own OpenAI API Key.

You can provide a custom list of OpenAI models to make available to your staff. Each entry must be the exact model name as defined in the OpenAI documentation; you can find it by clicking on a model there and expanding the dropdown to reveal its real name.
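For example, as of this writing, names such as gpt-4o and gpt-4o-mini appear in OpenAI's documentation; always confirm the current names there, since OpenAI's model lineup changes over time.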

Be mindful that you are responsible for the cost of your own OpenAI API key and token consumption.
