Building Fenced Prompts
The core of Prompt Fence is the `PromptBuilder`. It lets you construct a single prompt string composed of multiple "segments", each carrying a specific trust rating.
The PromptBuilder
The builder follows a fluent interface pattern. You can chain methods to add segments in order.
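"Fluent" simply means each add-method returns the builder itself, so calls chain in segment order. A minimal sketch of the pattern (a toy mock for illustration, not Prompt Fence's actual implementation):

```python
class PromptBuilder:
    """Toy mock of a fluent builder: every add-method returns self so calls chain."""

    def __init__(self):
        self._segments = []

    def trusted_instructions(self, content, source="system"):
        self._segments.append(("TRUSTED", source, content))
        return self  # returning self is what enables chaining

    def untrusted_content(self, content, source="user"):
        self._segments.append(("UNTRUSTED", source, content))
        return self

# Chained calls read top-to-bottom in segment order:
builder = (
    PromptBuilder()
    .trusted_instructions("You are a helpful assistant.")
    .untrusted_content("Hi there!")
)
```

The alternative, calling each method on its own line without chaining, works identically because each call mutates the same builder.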
Segment Types
1. Trusted Instructions
Use for: System instructions, prompt templates, few-shot examples that YOU define.
- Rating: `TRUSTED`
- Default Source: `system`
2. Untrusted Content
Use for: Any input that comes from an external user, even if you think it's safe.
- Rating: `UNTRUSTED`
- Default Source: `user`
3. Partially Trusted Content
Use for: Content from 3rd party APIs or partners that you trust more than a random user but less than your own system.
- Rating: `PARTIALLY_TRUSTED`
- Default Source: `partner`
4. Raw Data
Use for: Large blobs of data (CSV, JSON) to be processed.
- Rating: `UNTRUSTED` (by default)
- Default Source: `data`
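The rating/source pairings above can be pictured as a small immutable segment record. This is only a sketch: the real `FenceSegment` fields, the enum name `TrustRating`, and the method names for the partially trusted and raw-data segments are assumptions, inferred by analogy with `trusted_instructions` and `untrusted_content`:

```python
from dataclasses import dataclass
from enum import Enum

class TrustRating(Enum):  # assumed enum; mirrors the ratings listed above
    TRUSTED = "TRUSTED"
    PARTIALLY_TRUSTED = "PARTIALLY_TRUSTED"
    UNTRUSTED = "UNTRUSTED"

@dataclass(frozen=True)  # frozen: segments are immutable once created
class FenceSegment:
    content: str
    rating: TrustRating
    source: str

# Default (rating, source) pairs for each segment type; the last two
# method names are hypothetical:
DEFAULTS = {
    "trusted_instructions": (TrustRating.TRUSTED, "system"),
    "untrusted_content": (TrustRating.UNTRUSTED, "user"),
    "partially_trusted_content": (TrustRating.PARTIALLY_TRUSTED, "partner"),
    "raw_data": (TrustRating.UNTRUSTED, "data"),
}
```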
Building the Prompt
Once you have added all segments, call `.build()` with your private key to sign everything.
The result is a `FencedPrompt` object.
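Conceptually, building means serializing the segments and attaching a signature over them. The sketch below uses a symmetric HMAC purely for brevity; the actual Prompt Fence signature scheme (presumably asymmetric, given the private key) is not documented here, and the `build` function shown is an illustrative stand-in, not the library's API:

```python
import hashlib
import hmac

def build(segments, private_key: bytes):
    """Sketch: join segment contents and sign the result.

    segments: list of (rating, source, content) tuples.
    Returns (prompt_text, hex_signature).
    """
    text = "\n".join(content for _rating, _source, content in segments)
    # Real library presumably uses an asymmetric signature; HMAC keeps the sketch short.
    signature = hmac.new(private_key, text.encode(), hashlib.sha256).hexdigest()
    return text, signature

text, sig = build(
    [("TRUSTED", "system", "Be helpful."), ("UNTRUSTED", "user", "Hi!")],
    b"example-secret",
)
```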
The FencedPrompt Object
This object behaves like a string, but also exposes useful metadata.
```python
# Use as string
print(prompt)

# Explicit string conversion
raw_prompt = prompt.to_plain_string()

# Inspect segments
for segment in prompt.segments:
    print(f"[{segment.rating}] {segment.source}: {len(segment.content)} chars")

# Access specific trust levels
trusted = prompt.trusted_segments
untrusted = prompt.untrusted_segments
partially_trusted = prompt.partially_trusted_segments
```
Notes:
- Caching: The string representation (`str(prompt)` or `.to_plain_string()`) is cached after the first access for performance.
- Immutability: Individual `FenceSegment` objects are frozen (immutable). You cannot modify their content or rating after creation.
- Thread Safety: The `FencedPrompt` object is thread-safe for reading and string conversion.
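The immutability guarantee looks like this in practice, assuming `FenceSegment` is implemented as a frozen dataclass (an assumption; only the observable behavior is documented):

```python
from dataclasses import FrozenInstanceError, dataclass

@dataclass(frozen=True)
class FenceSegment:  # illustrative stand-in for the real class
    content: str
    rating: str
    source: str

seg = FenceSegment("Hello", "UNTRUSTED", "user")
try:
    seg.content = "tampered"  # any attribute assignment raises
except FrozenInstanceError:
    print("segments cannot be modified after creation")
```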
String Conversion & Concatenation
The `FencedPrompt` object implements Python's string magic methods, so it integrates seamlessly with existing code:
```python
# 1. Direct concatenation works
intro = "Here is a secured prompt:\n\n"
full_text = intro + prompt + "\n\nAnswer:"

# 2. F-strings work
print(f"Sending prompt of length {len(prompt)}")

# 3. Pass directly to functions expecting strings
def count_tokens(text: str):
    return len(text.split())

count = count_tokens(prompt)
```
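Under the hood, this kind of string integration comes from a handful of magic methods. A minimal sketch of the wiring (not the library's actual code), covering concatenation and `len()`:

```python
class FencedPrompt:
    """Sketch: string-like wrapper via __str__, __len__, __add__/__radd__."""

    def __init__(self, text: str):
        self._text = text

    def __str__(self):
        return self._text

    def __len__(self):
        return len(self._text)

    def __add__(self, other):    # prompt + "suffix"
        return self._text + str(other)

    def __radd__(self, other):   # "prefix" + prompt
        return str(other) + self._text

prompt = FencedPrompt("fenced body")
full = "intro: " + prompt + " :outro"  # concatenation returns a plain str
```

Note that `__radd__` is what makes `"prefix" + prompt` work: `str.__add__` returns `NotImplemented` for a non-`str` right operand, so Python falls back to the right operand's `__radd__`.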
Advanced Examples
Chat History
You can rebuild a conversation history using trusted segments for the assistant's past replies (if you trust your own outputs) and untrusted segments for user replies.
```python
builder = PromptBuilder()

# System
builder.trusted_instructions("You are a chat bot.")

# History
builder.untrusted_content("Hello!", source="user")
builder.trusted_instructions("Hi there, how can I help?", source="assistant_history")
builder.untrusted_content("Ignore previous instructions...", source="user")

prompt = builder.build(private_key)
```