# Overview
- Design a Verdict Pipeline composed of Units (built-ins, custom, and arbitrary map operators) and Layers. These building blocks can be composed into arbitrary dependency graphs using the `>>` operator. Each Unit can be customized via its:
  - Model: the provider or vLLM model to use, client-side rate limiting, inference parameters, prefix caching, fallbacks, etc.
  - Prompt: with templated references to the input data, upstream/previous Unit output, and instance fields.
  - Schema: the well-specified input (optional), LLM response, and output (optional) to be extracted from the LLM response.
  - Scale: discrete (e.g., 1-5), continuous (e.g., 0-1), or categorical (e.g., yes/no); particularly useful for logprob-based uncertainty estimation.
  - Extractor: how we marshal the LLM response into the output schema — structured output, regex, post-hoc parsing, logprobs, etc.
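To give a feel for `>>`-style graph composition, here is a minimal, self-contained sketch of how a chaining operator can be implemented with `__rshift__`. The `Node` class and the judge/clip steps are purely illustrative assumptions, not Verdict's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: a pipeline step wrapping an arbitrary map operator.
# Names here are illustrative only, not Verdict's real classes.
@dataclass
class Node:
    fn: Callable[[Dict], Dict]
    downstream: List["Node"] = field(default_factory=list)

    def __rshift__(self, other: "Node") -> "Node":
        # "self >> other" wires other's input to self's output and
        # returns other, so chains like a >> b >> c read left to right.
        self.downstream.append(other)
        return other

    def run(self, data: Dict) -> Dict:
        out = self.fn(data)
        for node in self.downstream:
            out = node.run(out)
        return out

# Toy two-step chain: a fake "judge" that assigns a score, then a clipper.
judge = Node(lambda d: {**d, "score": len(d["answer"]) % 5 + 1})
clip = Node(lambda d: {**d, "score": min(d["score"], 3)})
judge >> clip
result = judge.run({"answer": "hello world"})
```

Returning `other` from `__rshift__` is what makes multi-step chains compose naturally, since each `>>` continues from the most recently added node.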
- Specify a DatasetWrapper on top of Hugging Face datasets or pandas DataFrames.
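As a rough illustration of what a dataset wrapper does, the sketch below normalizes rows (e.g., records from a Hugging Face dataset or a pandas `DataFrame.to_dict("records")`) and keeps only the requested columns. This is an assumed, simplified stand-in, not Verdict's actual `DatasetWrapper`:

```python
from typing import Any, Dict, Iterable, List, Optional

class DatasetWrapper:
    """Illustrative sketch only -- not Verdict's real DatasetWrapper.

    Stores an iterable of row dicts, optionally restricted to a
    subset of columns, and exposes simple iteration."""

    def __init__(self, rows: Iterable[Dict[str, Any]],
                 columns: Optional[List[str]] = None):
        self.rows = [
            {k: row[k] for k in (columns if columns is not None else row.keys())}
            for row in rows
        ]

    def __len__(self) -> int:
        return len(self.rows)

    def __iter__(self):
        return iter(self.rows)

# Hypothetical usage: keep only the fields the pipeline needs.
ds = DatasetWrapper(
    [{"question": "2+2?", "gold": "4", "extra": 0}],
    columns=["question", "gold"],
)
```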
- Run the pipeline on the dataset, visualize its progress, and compute correlation metrics.