Table of Contents

  1. Tasks Overview
  2. HTML Response Format Tasks
  3. JSON Response Format Tasks
  4. Using Model Tools
  5. Overriding the LLM Model
  6. Task Language Preference

Tasks Overview

Raif::Task is designed for single-shot LLM operations. Each task defines a prompt, system prompt, and response format (:html, :text, or :json). Use the generator to create a new task, which you'll then invoke via the task class's run method.

For example, say you have a Document model in your app and want to have a summarization task for the LLM:

rails generate raif:task DocumentSummarization --response-format html

This will create a new task in app/models/raif/tasks/document_summarization.rb.

If you don’t specify a response format, it will default to :text, which expects a plaintext response from the LLM. If you want the LLM to return a formatted response, you can specify the response format as :html or :json. Make sure to include instructions in your prompt to the LLM to return the response in the specified format.

If you specify a response format, Raif will automatically parse the response for you; this is described in more detail in the Response Formats docs. Call the parsed_response method to get the parsed response, or raw_response to get the raw, unprocessed response.
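For example, once a formatted task has run (using the document summarization task defined below; the variable names are illustrative), you can compare the two:

task = Raif::Tasks::DocumentSummarization.run(document: document, creator: user)
task.raw_response    # the LLM's unprocessed output, e.g. "<p>The article argues that...</p>"
task.parsed_response # the parsed/sanitized version, according to the task's response format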

HTML Response Format Tasks

Below is an example task that uses the :html response format:

class Raif::Tasks::DocumentSummarization < Raif::ApplicationTask
  llm_response_format :html # :html, :text, or :json
  llm_temperature 0.8 # optional, defaults to 0.7
  llm_response_allowed_tags %w[p b i div strong] # optional, defaults to Rails::HTML5::SafeListSanitizer.allowed_tags
  llm_response_allowed_attributes %w[style] # optional, defaults to Rails::HTML5::SafeListSanitizer.allowed_attributes

  # Any attr_accessor you define can be included as an argument when calling `run`. 
  # E.g. Raif::Tasks::DocumentSummarization.run(document: document, creator: user)
  attr_accessor :document
  
  def build_system_prompt
    sp = "You are an assistant with expertise in summarizing detailed articles into clear and concise language."
    sp += system_prompt_language_preference if requested_language_key.present?
    sp
  end

  def build_prompt
    <<~PROMPT
      Consider the following information:

      Title: #{document.title}
      Text:
      ```
      #{document.content}
      ```

      Your task is to read the provided article and associated information, and summarize the article concisely and clearly in approximately 1 paragraph. Your summary should include all of the key points, views, and arguments of the text, and should only include facts referenced in the text directly. Do not add any inferences, speculations, or analysis of your own, and do not exaggerate or overstate facts. If you quote directly from the article, include quotation marks.

      Format your response using basic HTML tags.

      If the text does not appear to represent the title, please return the text "Unable to generate summary" and nothing else.
    PROMPT
  end

end

And then run the task (typically via a background job):

document = Document.first # assumes your app defines a Document model
user = User.first # assumes your app defines a User model
task = Raif::Tasks::DocumentSummarization.run(document: document, creator: user)
summary = task.parsed_response
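
Since run makes a synchronous LLM call, wrapping it in a job is a common pattern. Below is a minimal sketch; the job class, queue name, and the summary column on Document are assumptions about your app, not part of Raif:

class DocumentSummarizationJob < ApplicationJob
  queue_as :default

  def perform(document, user)
    task = Raif::Tasks::DocumentSummarization.run(document: document, creator: user)

    # Store the result wherever your app needs it; `summary` is a hypothetical column.
    document.update!(summary: task.parsed_response)
  end
end

# Enqueue it from a controller or callback:
# DocumentSummarizationJob.perform_later(document, user)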

JSON Response Format Tasks

If you want the LLM to return a JSON response, use llm_response_format :json in your task.

If you’re using OpenAI, Raif will enable JSON mode for the response. If you define a JSON schema using the json_response_schema method, Raif will use OpenAI’s structured outputs feature. If you’re using Anthropic, Raif will insert a tool for Claude to use to generate a JSON response.

rails generate raif:task WebSearchQueryGeneration --response-format json

This will create a new task in app/models/raif/tasks/web_search_query_generation.rb:

module Raif
  module Tasks
    class WebSearchQueryGeneration < Raif::ApplicationTask
      llm_response_format :json

      attr_accessor :topic

      json_response_schema do
        array :queries do
          items type: "string"
        end
      end

      def build_prompt
        <<~PROMPT
          Generate a list of 3 search queries that I can use to find information about the following topic:
          #{topic}

          Format your response as JSON.
        PROMPT
      end
    end
  end
end
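
Running it looks the same as any other task. With a :json response format, parsed_response returns the decoded JSON (a sketch; the topic and creator values are placeholders):

task = Raif::Tasks::WebSearchQueryGeneration.run(topic: "history of the Ruby on Rails framework", creator: user)
task.parsed_response # e.g. { "queries" => ["rails history", "origins of ruby on rails", "rails framework timeline"] }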

Using Model Tools

Raif::Task supports the use of model tools. Any model tool that is included in the task’s available_model_tools array will be available to the LLM when the task is run.

You can provide the tools at runtime:

Raif::Tasks::DocumentSummarization.run(
  document: document, 
  creator: user, 
  available_model_tools: ["Raif::ModelTools::GoogleSearch"]
)

Or if you want all instances of a task to have the tools available by default:

class MyTask < Raif::Task
  before_create ->{
    self.available_model_tools << "Raif::ModelTools::GoogleSearch"
  }
end
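
With a default like this in place, new task instances carry the tool without it being passed to run (a sketch; MyTask and its arguments refer to the placeholder class above):

task = MyTask.run(creator: user)
task.available_model_tools # includes "Raif::ModelTools::GoogleSearch"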

Overriding the LLM Model

By default, a Raif::Task will use the model specified in Raif.config.default_llm_model_key. You can override this in a couple of places.

By passing a different model key to the run method:

task = Raif::Tasks::DocumentSummarization.run(
  document: document,
  creator: user,
  llm_model_key: "open_ai_gpt_4_1"
)

Or by overriding default_llm_model_key in the task definition:

class MyTask < Raif::Task
  def default_llm_model_key
    if Rails.env.production?
      :open_ai_gpt_4_1
    else
      :open_ai_gpt_4_1_mini
    end
  end
end

Task Language Preference

Tasks let you specify a language preference for the LLM response. When enabled, Raif will add a line to the system prompt that looks something like:

You're collaborating with a teammate who speaks Spanish. Please respond in Spanish.

You can trigger this behavior in a couple of ways:

  1. If the creator you pass to the run method responds to preferred_language_key, Raif will use that value (see the sketch at the end of this section).

  2. Pass requested_language_key as an argument to the run method:

    task = Raif::Tasks::DocumentSummarization.run(
      document: document,
      creator: user,
      requested_language_key: "es"
    )
    

The current list of valid language keys can be found here.
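
For example, to rely on option 1, the creator you pass in just needs to respond to preferred_language_key. A minimal sketch, assuming a User model in your app (the method body is illustrative; only the method name matters to Raif):

class User < ApplicationRecord
  # Raif calls this if it's defined and uses the returned language key.
  def preferred_language_key
    "es" # could come from a database column, a profile setting, etc.
  end
end

task = Raif::Tasks::DocumentSummarization.run(document: document, creator: User.first)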


Read next: Conversations