verdict package
Submodules
verdict.config module
verdict.dataset module
- class verdict.dataset.DatasetWrapper(dataset: Dataset, input_fn: InputFn | None = None, columns: List[str] | None = None, max_samples: int | None = None)
Bases: Iterator[Tuple[Dict[str, Any], Schema]]
- dataset: pd.DataFrame
- static from_hf(dataset: Dict[str, 'Dataset'], input_fn: InputFn | None = None, columns: List[str] | None = None, max_samples: int | None = None, expand: bool = False) Dict[str, 'DatasetWrapper']
- static from_pandas(df: pd.DataFrame, input_fn: InputFn | None = None, columns: List[str] | None = None, split_column: str | None = None, max_samples: int | None = None)
- input_fn: InputFn
- static load(path: Path) DatasetWrapper
- max_samples: int | None
- samples: pd.DataFrame
- save(path: Path) None
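A minimal usage sketch, assuming only the constructors documented above; the DataFrame columns are placeholders:

```python
import pandas as pd

from verdict.dataset import DatasetWrapper

df = pd.DataFrame({
    "question": ["2 + 2?", "Capital of France?"],
    "answer": ["4", "Paris"],
})

# Wrap the DataFrame; `columns` restricts which fields flow into the pipeline.
dataset = DatasetWrapper.from_pandas(df, columns=["question", "answer"], max_samples=100)
```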
verdict.extractor module
- class verdict.extractor.ArgmaxScoreExtractor
Bases: TokenProbabilityExtractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema, Usage]
- class verdict.extractor.CustomExtractor
Bases: RawExtractor, ABC
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema, Usage]
- inject(unit) None
- abstract post_extract(output: str, logger: Logger) Dict[str, Any]
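A sketch of a CustomExtractor subclass; the field name and the last-line convention are illustrative assumptions, not library conventions:

```python
from logging import Logger
from typing import Any, Dict

from verdict.extractor import CustomExtractor

class LastLineExtractor(CustomExtractor):
    """Pulls the final line of the raw response into a single field."""

    def post_extract(self, output: str, logger: Logger) -> Dict[str, Any]:
        # Hypothetical convention: the model puts its verdict on the last line.
        return {"verdict": output.strip().splitlines()[-1]}
```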
- class verdict.extractor.Extractor
Bases: ABC
Represents a method of extracting a ResponseSchema from a provider call.
- Some examples:
  - function-calling / structured output via instructor
  - obtaining a probability using logprobs over a token support (e.g., yes/no)
  - having a second LLM extract from a raw response string
- abstract extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema | Iterator[Schema], Usage]
- classmethod format() str
- inject(unit) None
- streaming: bool = False
- class verdict.extractor.PostHocExtractor(policy_or_name: str | Model | List[str | Model] | None = None, retries: int = 1, **inference_parameters)
Bases: StructuredOutputExtractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema | Iterator[Schema], Usage]
- extract_client_wrappers: List[ClientWrapper] | None = None
- format() str
- model_selection_policy: ModelSelectionPolicy | None = None
- class verdict.extractor.RawExtractor
Bases: Extractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema | Iterator[Schema], Usage]
- field_name: str
- inject(unit) None
- class verdict.extractor.RegexExtractor(fields: Dict[str, str])
Bases: CustomExtractor
- FIRST_FLOAT = '[+-]?\\d+(\\.\\d+)?'
- FIRST_INT = '[+-]?\\d+'
- fields: Dict[str, Pattern]
- post_extract(output: str, logger: Logger) Dict[str, Any]
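Constructing a RegexExtractor from the documented class constants; the field names here are placeholders:

```python
from verdict.extractor import RegexExtractor

# Each field maps to a regex; FIRST_FLOAT / FIRST_INT match the first
# float or integer appearing in the raw response.
extractor = RegexExtractor(fields={
    "score": RegexExtractor.FIRST_FLOAT,
    "rank": RegexExtractor.FIRST_INT,
})
```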
- class verdict.extractor.SampleScoreExtractor
Bases: TokenProbabilityExtractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema, Usage]
- class verdict.extractor.StructuredOutputExtractor
Bases: Extractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema | Iterator[Schema], Usage]
- class verdict.extractor.TokenProbabilityExtractor
Bases: Extractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema, Usage]
- field_name: str
- inject(unit) None
- scale: DiscreteScale
- stream(stream: bool = False) Self
- class verdict.extractor.Usage(in_tokens: int, out_tokens: int)
Bases: object
- in_tokens: int
- is_unknown() bool
- out_tokens: int
- class verdict.extractor.WeightedSummedScoreExtractor
Bases: TokenProbabilityExtractor
- extract(client_wrapper: ClientWrapper, prompt_message: PromptMessage, logger: Logger) Tuple[Schema, Usage]
- inject(unit) None
verdict.model module
- class verdict.model.Client(complete: Callable, model: verdict.model.Model, inference_parameters: dict[str, Any])
Bases: object
- complete: Callable
- defaults(**inference_parameters_defaults) ContextManager[None, bool | None]
- inference_parameters: dict[str, Any]
- class verdict.model.ClientWrapper(model: Model, **inference_parameters)
Bases: object
- encode(word: str) List[int]
- static from_model(model: Model, **inference_parameters) ClientWrapper
- inference_parameters: dict[str, Any]
- class verdict.model.Model
Bases: ABC
- property char: str
- property connection_parameters: dict[str, Any]
- name: str
- rate_limit: RateLimitPolicy
- rate_limiter: RateLimitPolicy | Dict[RateLimiter, str | RateLimiterMetric] | None = None
- use_nonce: bool = False
- class verdict.model.ModelConfigurable
Bases: ABC
- property model_selection_policy: ModelSelectionPolicy | None
- abstract set(attr, value) None
- via(policy_or_name: ModelSelectionPolicy | str | Model | List[str | Model], retries: int = 1, **inference_parameters) Self
- class verdict.model.ModelSelectionPolicy
Bases: object
- property char: str
- static from_any(policy_or_name: ModelSelectionPolicy | str | Model | List[str | Model], retries: int = 1, **inference_parameters) ModelSelectionPolicy
- static from_name(model: str | Model, retries: int = 1, **inference_parameters) ModelSelectionPolicy
- static from_names(model_names: List[Tuple[str, int, dict]]) ModelSelectionPolicy
- get_clients() Iterator[ClientWrapper]
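A sketch of building selection policies from the documented factories; the model names are placeholders:

```python
from verdict.model import ModelSelectionPolicy

# Single model, retried up to 3 times, with fixed inference parameters.
policy = ModelSelectionPolicy.from_name("gpt-4o-mini", retries=3, temperature=0.0)

# Fallback chain of (name, retries, inference_parameters) tuples, per from_names.
fallback = ModelSelectionPolicy.from_names([
    ("gpt-4o-mini", 3, {"temperature": 0.0}),
    ("gpt-4o", 1, {}),
])
```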
- class verdict.model.ProviderModel(name: str, use_nonce: bool = False, rate_limiter: RateLimitPolicy | Dict[RateLimiter, str | RateLimiterMetric] | None = None)
Bases: Model
- property connection_parameters: dict[str, Any]
- name: str
- rate_limiter: RateLimitPolicy | Dict[RateLimiter, str | RateLimiterMetric] | None = None
- use_nonce: bool = False
- class verdict.model.vLLMModel(name: str, api_base: str, api_key: str, rate_limiter: RateLimitPolicy | Dict[RateLimiter, str | RateLimiterMetric] | None = None)
Bases: Model
- api_base: str
- api_key: str
- property connection_parameters: dict[str, Any]
- name: str
- rate_limiter: RateLimitPolicy | Dict[RateLimiter, str | RateLimiterMetric] | None = None
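Pointing Verdict at a locally served vLLM endpoint, per the constructor above; the URL, key, and model name are placeholders:

```python
from verdict.model import vLLMModel

model = vLLMModel(
    name="meta-llama/Llama-3.1-8B-Instruct",
    api_base="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM servers commonly accept a dummy key
)
```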
verdict.prompt module
- class verdict.prompt.PromptMessage(system, user)
Bases: NamedTuple
- system: str | None
Alias for field number 0
- to_messages(add_nonce: bool = False) List[Dict[str, str]]
- user: str
Alias for field number 1
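PromptMessage is a plain NamedTuple; to_messages converts it into chat-format message dicts (the exact dict shape shown in the comment is an assumption):

```python
from verdict.prompt import PromptMessage

msg = PromptMessage(system="You are a strict grader.", user="Grade this answer: 4")
messages = msg.to_messages()
# Presumably OpenAI-style role dicts, e.g.
# [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}]
```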
- class verdict.prompt.PromptRegistry(name, bases, dct)
Bases: type
- static extract_sections(template: str) Tuple[str | None, str | None, bool]
- static strip_prompt_template(prompt: str) str
- class verdict.prompt.Promptable
Bases: ABC
- abstract populate_prompt_message(input: Schema, logger: Logger) PromptMessage
verdict.scale module
- class verdict.scale.BooleanScale(yes: List[str] = ['yes', 'Yes', 'YES'], no: List[str] = ['no', 'No', 'NO'])
Bases: DiscreteScale
- pydantic_fields(key: str = 'output') Dict[str, Tuple[Any, FieldInfo]]
- token_support() List[str]
- value_mapping_fn(output: str) bool
- class verdict.scale.ContinuousScale(min_value: float, max_value: float, end_is_worst: bool = False)
Bases: Scale
- prompt() str
- pydantic_fields(key: str = 'output') Dict[str, Tuple[Any, FieldInfo]]
- value_mapping_fn(output: float) float
- class verdict.scale.DiscreteScale(values: List[Any] | Tuple[Any, Any] | Tuple[Any, Any, int | None], end_is_worst: bool = False)
Bases: Scale
- index(token: str) int
- prompt() str
- pydantic_fields(key: str = 'output') Dict[str, Tuple[Any, FieldInfo]]
- token_support() List[str]
- value_mapping_fn(output: str) Any
- values: List[Any]
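Construction sketches for each scale, following the signatures above; interpreting the two-tuple form of DiscreteScale as an inclusive range is an assumption:

```python
from verdict.scale import BooleanScale, ContinuousScale, DiscreteScale

likert = DiscreteScale((1, 5))            # two-tuple form per the signature
grades = DiscreteScale(["A", "B", "C"])   # explicit value list
score = ContinuousScale(0.0, 1.0)         # bounded float scale
yes_no = BooleanScale()                   # maps yes/no token variants to bool
```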
verdict.schema module
- class verdict.schema.Schema
Bases: BaseModel, ABC
- conform(expected: Type[Schema], logger: Logger | None = None) Schema
Conform the current schema to the expected schema. For each field missing from this Schema (i.e., present in expected but not in type(self)):
1. Check whether expected defines a default factory for the field; if so, use it.
2. Otherwise, copy the first field in self whose type matches that of the expected field.
- escape() str
- static generate_key(field_info: FieldInfo) str
- static infer_pydantic_annotation(obj: Any) Any
Infer the appropriate Pydantic annotation for a given object.
- classmethod is_empty() bool
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'frozen': True}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
- model_post_init(context: Any, /) None
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
Args:
    self: The BaseModel instance.
    context: The context.
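Defining and conforming schemas, a sketch; the class and field names are arbitrary:

```python
from verdict.schema import Schema

class WideInput(Schema):
    question: str
    answer: str

class NarrowInput(Schema):
    question: str

wide = WideInput(question="2 + 2?", answer="4")
# conform follows the rules documented above: defaults from the expected
# schema first, then the first type-matching field from self.
narrow = wide.conform(NarrowInput)
```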
verdict.transform module
- class verdict.transform.MapUnit(map_func: Callable[[Any | List[Any]], Any | List[Any]], **kwargs)
Bases: Unit
- class InputSchema(*, values: Any | List[Any])
Bases: Schema
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'frozen': True}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
- model_post_init(context: Any, /) None
We need to both initialize private attributes and call the user-defined model_post_init method.
- values: Any | List[Any]
- OutputSchema: alias of ResponseSchema
- class ResponseSchema(*, values: Any | List[Any])
Bases: Schema
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'frozen': True}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.
- model_post_init(context: Any, /) None
We need to both initialize private attributes and call the user-defined model_post_init method.
- values: Any | List[Any]
- accumulate: bool = True
- execute(input: InputSchema) ResponseSchema
- lightweight: bool = True
- map_func: Callable[[Any | List[Any]], Any | List[Any]]
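A MapUnit sketch wrapping an arbitrary callable, per the signature above; the lambda assumes a list input:

```python
from verdict.transform import MapUnit

# Lightweight transform node: receives the upstream value(s) and returns
# the mapped value(s); no model call is involved.
double = MapUnit(lambda values: [v * 2 for v in values])
```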
- class verdict.transform.MaxPoolUnit(fields: str | List[str] = [])
Bases: FieldMapUnit
- fields: List[str]
Module contents
- class verdict.Block(name: str | None = None)
Bases: Graph[Unit | Layer], Node, ModelConfigurable
- property char: str
- use_root: bool = True
- class verdict.Layer(nodes: Node | List[Node], repeat: int = 1, inner: str | Inner = Inner.NONE, outer: str | Outer = Outer.DENSE)
Bases: Graph[Node], Node, ModelConfigurable
Ordered list of units.
- property char: str
- clone() Self
Returns an associated deep copy of the node.
Used to create a new execution instance of a node.
- copy() Self
Returns a completely independent deep copy of the node.
Used in .from_sequence, etc.
- how_inner: Inner
- how_outer: Outer
- leaf_idx: List[int]
- property leaf_nodes: List[Node]
- order: List[Node]
- root_idx: List[int]
- property root_nodes: List[Node]
- sort() List[Node]
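A Layer sketch; MapUnit stands in for any Unit, and passing outer as the string "dense" (rather than the Outer enum) is an assumption based on the str | Outer annotation:

```python
from verdict import Layer
from verdict.transform import MapUnit

# Three parallel clones of a unit, densely wired to the next stage.
panel = Layer(MapUnit(lambda values: values), repeat=3, outer="dense")
```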
- class verdict.Pipeline(name: str = 'Pipeline')
Bases: object
- checkpoint(path: Path)
- collect_outputs(executor: GraphExecutor, block_instance: Block) Tuple[Dict[str, Schema], List[str]]
- executor: GraphExecutor
- name: str
- plot(display=False) Image
- restore(path: Path)
- run(input_data: Schema = ..., max_workers: int = 128, display: bool = False, graceful: bool = False) Tuple[Dict[str, Schema], List[str]]
- run_from_dataset(dataset: DatasetWrapper, max_workers: int = 128, experiment_config=None, display: bool = False, graceful: bool = False) Tuple['pd.DataFrame', List[str]]
- run_from_list(dataset: List[Schema], max_workers: int = 128, experiment_config=None, display: bool = False, graceful: bool = False) Tuple[Dict[str, Schema], List[str]]
- via(policy_or_name: ModelSelectionPolicy | str, retries: int = 1, **inference_parameters) Self
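Model selection and dataset execution, a sketch; the pipeline's unit composition is omitted, the model name is a placeholder, and `dataset` reuses the DatasetWrapper sketch from verdict.dataset above:

```python
from verdict import Pipeline

pipeline = Pipeline(name="EvalRun").via("gpt-4o-mini", retries=2, temperature=0.0)

# Returns a results DataFrame plus the names of the leaf output columns.
results_df, leaf_columns = pipeline.run_from_dataset(dataset, max_workers=8)
```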
- class verdict.Unit(**kwargs)
Bases: Node, Task, ModelConfigurable, Promptable, DataFlowSchema[InputSchemaT, ResponseSchemaT, OutputSchemaT]
- property char: str
- clone() Self
Returns an associated deep copy of the node.
Used to create a new execution instance of a node.
- copy() Self
Returns a completely independent deep copy of the node.
Used in .from_sequence, etc.
- data: UserState
- property description: str | None
- execute(input: InputSchemaT) OutputSchemaT
- lightweight: bool = False
- model_selection_policy: ModelSelectionPolicy | None = None
- populate_prompt_message(input: Schema, logger: Logger) PromptMessage
- process(input: InputSchemaT, response: ResponseSchemaT) OutputSchemaT | ResponseSchemaT
- validate(input: InputSchemaT, response: ResponseSchemaT) None
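A skeleton of the validate/process hooks; the `score` response field and the retry-on-raise behavior are assumptions, and the prompt/schema plumbing a real Unit needs is omitted:

```python
from verdict import Unit

class ThresholdUnit(Unit):
    def validate(self, input, response) -> None:
        # Reject out-of-range scores; raising here is assumed to trigger
        # the unit's retry policy.
        if not 0.0 <= response.score <= 1.0:
            raise ValueError(f"score out of range: {response.score}")

    def process(self, input, response):
        # Post-process the validated response before it flows downstream.
        return response
```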