# Link Prompts to Traces
Linking prompts to [Observability & Tracing](docid\ l z5ltur bd4f7v9mqkrc) enables tracking of metrics and evaluations per prompt version. It is the foundation for improving prompt quality over time. After prompts and traces are linked, navigating to a generation span in abv will highlight the prompt that was used to generate the response. To access the metrics, navigate to your prompt and click on the Metrics tab.

## How to link prompts to traces

### Python SDK

**Decorators**

```python
from abvdev import observe, get_client

abv = get_client()


@observe(as_type="generation")
def nested_generation():
    prompt = abv.get_prompt("movie-critic")

    # Attach the fetched prompt to the current generation to create the link
    abv.update_current_generation(
        prompt=prompt,
    )


@observe()
def main():
    nested_generation()


main()
```

**Context managers**

```python
from abvdev import get_client

abv = get_client()
prompt = abv.get_prompt("movie-critic")

with abv.start_as_current_generation(
    name="movie-generation",
    model="gpt-4o",
    prompt=prompt,
) as generation:
    # your LLM call here
    generation.update(output="llm response")
```

If a fallback prompt from [Guaranteed Availability](docid\ yeedfgywxhnbfmaan5v52) is used, no link will be created.

### JS/TS SDK

**Manual observations**

```typescript
import { AbvClient } from "@abvdev/client";
import { startObservation } from "@abvdev/tracing";

const prompt = new AbvClient().prompt.get("my-prompt");

startObservation(
  "llm",
  {
    input: (await prompt).prompt,
  },
  { asType: "generation" },
);
```

**Context manager**

```typescript
import { AbvClient } from "@abvdev/client";
import { startActiveObservation } from "@abvdev/tracing";

const abv = new AbvClient();

startActiveObservation(
  "llm",
  async (generation) => {
    const prompt = abv.prompt.get("my-prompt");

    generation.update({ input: (await prompt).prompt });
  },
  { asType: "generation" },
);
```

**Observe wrapper**

```typescript
import { AbvClient } from "@abvdev/client";
import { observe, updateActiveObservation } from "@abvdev/tracing";

const abv = new AbvClient();

const callLLM = async (input: string) => {
  const prompt = await abv.prompt.get("my-prompt");

  updateActiveObservation({ prompt }, { asType: "generation" });

  // invokeLLM is a placeholder for your actual LLM call
  return await invokeLLM(input);
};

export const observedCallLLM = observe(callLLM);
```

If a fallback prompt from [Guaranteed Availability](docid\ yeedfgywxhnbfmaan5v52) is used, no link will be created.

## Metrics reference

The Metrics tab on a prompt shows, per prompt version:

- Median generation latency
- Median generation input tokens
- Median generation output tokens
- Median generation costs
- Generation count
- Median [scores](docid\ hai73gnnamxtypyenpkef) value
- First and last generation timestamp
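To make the snippets above concrete, here is a minimal end-to-end sketch in Python that fetches a prompt, calls a model, and links the generation to the prompt version. It assumes the abvdev SDK surface shown above, the `openai` package, a prompt named `movie-critic` with a `{{movie}}` template variable, and a `compile()` helper for variable substitution; these names and helpers are assumptions for illustration, not confirmed API.

```python
# A minimal end-to-end sketch, not a definitive implementation.
# Assumptions: abvdev exposes get_client()/get_prompt() as shown above,
# the prompt object has a compile() helper, and the OpenAI SDK is installed.
from abvdev import get_client
from openai import OpenAI

abv = get_client()
openai_client = OpenAI()

prompt = abv.get_prompt("movie-critic")  # fetch the current prompt version

with abv.start_as_current_generation(
    name="movie-generation",
    model="gpt-4o",
    prompt=prompt,  # passing the prompt here creates the prompt-to-trace link
) as generation:
    # compile() substitutes template variables; {{movie}} is hypothetical
    completion = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt.compile(movie="Dune 2")}],
    )
    generation.update(output=completion.choices[0].message.content)
```

Once such a generation has run, its span should highlight the linked prompt version, and the prompt's Metrics tab aggregates latency, token, and cost figures across all linked generations.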
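The notes above state that no link is created when a fallback prompt is served. Below is a minimal sketch of that case, assuming `get_prompt` accepts a `fallback` argument for guaranteed availability; whether and how abvdev exposes this parameter is an assumption, so consult the Guaranteed Availability page.

```python
# Sketch only: the `fallback` parameter is assumed, not confirmed for abvdev.
from abvdev import get_client

abv = get_client()

# If the prompt cannot be fetched, the fallback text is served instead.
prompt = abv.get_prompt("movie-critic", fallback="Critique the movie {{movie}}.")

with abv.start_as_current_generation(
    name="movie-generation",
    model="gpt-4o",
    prompt=prompt,  # no prompt-to-trace link is created when the fallback was served
) as generation:
    generation.update(output="llm response")
```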