# Overview

  1. Design a Verdict Pipeline composed of Units (built-ins, custom, and arbitrary map operators) and Layers. These building blocks can be composed into arbitrary dependency graphs using the `>>` operator. We can customize the:
    1. Model: provider/vLLM model to use, client-side rate-limiting, inference parameters, prefix caching, fallbacks, etc.
    2. Prompt: with templated references to the input data, upstream/previous Unit output, and instance fields.
    3. Schema: the well-specified input (optional), LLM response, and output (optional) to be extracted from the LLM response.
      1. Scale: discrete (1–5), continuous (0–1), or categorical (yes/no); particularly useful for logprobs-based uncertainty estimation.
    4. Extractor: how we marshal the LLM response into the output schema — structured, regex, post-hoc, logprobs, etc.
  2. Specify a DatasetWrapper on top of Hugging Face datasets or pandas DataFrames.

  3. Run the pipeline on the dataset, visualize its progress, and compute correlation metrics.
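The `>>` composition in step 1 can be sketched with a toy graph builder. This is an illustrative stand-in, not Verdict's actual API: the `Node` class and its fields are assumptions made only to show how `__rshift__` can wire nodes into a dependency graph.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Toy stand-in for a pipeline Unit (illustrative only, not Verdict's API)."""
    name: str
    downstream: list = field(default_factory=list)

    def __rshift__(self, other):
        # `a >> b` registers b as a dependent of a and returns b,
        # so chains like a >> b >> c build a linear dependency graph.
        self.downstream.append(other)
        return other

judge = Node("judge")
aggregate = Node("aggregate")
report = Node("report")
judge >> aggregate >> report

assert aggregate in judge.downstream
```

Because `__rshift__` returns its right operand, chaining reads left to right, and fan-out graphs can be built by calling `>>` from the same node more than once.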
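Step 1's extractor — marshalling a free-text LLM response onto a discrete scale — might look like the regex-based sketch below. The function name, signature, and `None` fallback are assumptions for illustration, not the library's implementation:

```python
import re

def extract_discrete(response: str, lo: int = 1, hi: int = 5):
    """Pull the first integer in [lo, hi] out of a raw LLM response.

    Returns None when no in-range integer is found, so a caller could
    fall back to another extractor (e.g. structured or logprobs).
    """
    for match in re.finditer(r"-?\d+", response):
        value = int(match.group())
        if lo <= value <= hi:
            return value
    return None

extract_discrete("I'd rate this a 4 out of 5.")  # first in-range integer: 4
```

A structured-output or logprobs extractor would replace the regex scan but keep the same contract: raw response in, typed scale value (or a signal to fall back) out.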
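The correlation check in step 3 — comparing pipeline scores against gold labels — can be done with a plain Pearson correlation. This is a pure-Python sketch under the assumption of non-constant inputs; a real run would more likely use scipy or the library's built-in metrics:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between judge scores xs and gold labels ys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pearson([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])  # perfectly linear: 1.0 (up to rounding)
```

For ordinal judge scales, a rank correlation (Spearman or Kendall) is often the more appropriate metric than Pearson.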