What is this feature?
Prompt Mode lets you instruct the system how to extract a specific piece of information from a document by writing instructions in plain English, the same way you’d explain the task to a colleague.
Instead of building a rigid pattern (like a regex) that only matches an exact format, you describe what you’re looking for and Veryfi’s AI figures out where it is, even if the wording, layout, or position varies from document to document.
The extracted value lands in a custom field you define, and it’s available immediately for routing, review, or export.
Business Rules with Prompt Mode to extract custom data are currently available for the Receipts/Invoices API (the /documents endpoint) only.
This is a Premium Feature - contact support to get access or to request a quote.
What problems does it solve?
Prompt Mode is the right tool when you need to extract a value that isn't covered by the standard /documents API schema, or when the way a field currently behaves doesn't match what your process actually needs.
Quick Set-up Guide
1. Go to the Business Rules section under Data Extraction.
2. Create a new Business Rule.
3. Define the condition. Set the trigger conditions: document source, document type, confidence score thresholds, or other criteria, including LLM Prompt and Custom Regex.
4. Choose “LLM Extract” as the Action.
5. Create your custom field. This is where the extracted value will be stored:
   - Give it a clear, descriptive name.
   - Select the field type.
6. Write your instruction. Describe what to extract in plain language. Keep it focused on one piece of information. See the best practices section below for prompt guidance.
7. Add examples (optional but recommended). Include 1 to 3 examples of what a correct output looks like. This anchors the model and reduces variability.
8. Set the fallback action.
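Taken together, the steps above define a rule. A minimal sketch of what such a configuration amounts to, expressed as a Python dictionary - note that the key names here are illustrative only, not the actual Business Rules schema (the real configuration is done in the Business Rules UI):

```python
# Illustrative sketch of a Prompt Mode rule definition.
# Key names are hypothetical -- the real rule is configured in the
# Business Rules UI, not via this exact structure.
rule = {
    "name": "extract_payment_method",
    "condition": {
        "document_type": "invoice",          # trigger criteria (step 3)
    },
    "action": "LLM Extract",                 # step 4
    "custom_field": {
        "name": "preferred_payment_method",  # descriptive name (step 5)
        "type": "string",                    # field type
    },
    "instruction": (                         # plain-language prompt (step 6)
        "Find the vendor's preferred payment method for this invoice. "
        "Limit the response to: ACH, Wire, or Check. "
        "If you can't find it, return null."
    ),
    "examples": ["ACH", "Wire"],             # 1-3 examples (step 7)
    "fallback": None,                        # fallback action (step 8)
}

print(rule["action"])
```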
What powers Prompt Mode?
Prompt Mode runs on AnyDocs, Veryfi's own in-house AI model built specifically for document understanding. Unlike general-purpose language models, AnyDocs is fine-tuned for data extraction and trained to work across a wide range of document types and structures. It combines a language model with a vision layer, so it can read and interpret not just text but also tables, images, logos, and complex layouts. When you write a plain-language instruction, AnyDocs is what reads the document and figures out the answer.
In short, any time a Business Rule with Prompt Mode runs, it uses the Veryfi AnyDocs LLM. API Documentation: https://docs.veryfi.com/api/anydocs/process-a-A-doc/
What needs to be enabled?
Two things must be active for your account before you can use this feature:
Custom fields are not enabled by default. You’ll need to reach out to support to have them turned on for your account. Once enabled, you can create custom fields directly in the Business Rules section or reuse existing ones.
Business Rules must be enabled for your workspace. Check with your admin or contact support if the “LLM Extract” action isn’t visible in your rule configuration.
Best practices for LLM Prompts
Write instructions the way you’d explain the task out loud. “Find the vendor’s preferred payment method for this invoice. Limit the options for response: ACH, Wire, or Check. If you can’t find it, return null.” is better than a vague single-word label.
Keep instructions concise. Long, elaborate prompts don’t always perform better and can slow things down. Aim for clarity over length.
Be specific about the output format. If you want a date, say what format. Ambiguity in the instruction leads to inconsistency in the output.
Anchor with examples. Adding even one example of the expected output significantly improves consistency, especially for fields with a defined set of valid values.
Test with real variation. Pull a sample of 15 to 20 actual documents that represent the range of layouts, vendors, and phrasings you expect. Review the results and refine the instruction until coverage is consistently high.
Learn more about Best Practices for AnyDocs Prompts when you create a Blueprint.
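Putting these practices together: a focused, constrained instruction plus a small post-check on the model’s output. The normalization helper below is our own illustration, not part of any Veryfi SDK - it simply shows how constraining the instruction to a fixed set of values makes the result easy to validate downstream:

```python
# The constrained instruction from the best-practices example above.
INSTRUCTION = (
    "Find the vendor's preferred payment method for this invoice. "
    "Limit the options for response: ACH, Wire, or Check. "
    "If you can't find it, return null."
)

ALLOWED = {"ACH", "Wire", "Check"}

def normalize(raw):
    """Map the model's raw answer onto the allowed set, or None.

    A hypothetical guard against run-to-run variation: even if the
    model varies its casing or adds whitespace, downstream code only
    ever sees one of the expected values.
    """
    if raw is None:
        return None
    cleaned = raw.strip()
    for option in ALLOWED:
        if cleaned.lower() == option.lower():
            return option
    return None  # fall back rather than pass through unexpected text

print(normalize("ach"))     # "ACH"
print(normalize("Wire  "))  # "Wire"
print(normalize("PayPal"))  # None
```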
Limitations to be aware of
It’s not deterministic. Unlike a regex, the model can interpret differently across runs or document versions. Use examples and clear output constraints to reduce this.
It works on ocr_text. If OCR quality is poor on a particular document, extraction quality will reflect that.
Latency is higher than that of rule-based extraction. As mentioned above, any time a rule with a prompt runs, it calls the Veryfi AnyDocs LLM, which adds processing time. For high-volume, time-sensitive pipelines, configure async processing with webhooks rather than relying on synchronous response times. If latency is critical, consider splitting processing into stages.
Custom fields only. Results can only be written to a custom field. Standard system fields are not writable via this method.
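For the async pattern mentioned above, a webhook handler would read the extracted value out of the delivered payload. The payload shape below is purely illustrative - check the API documentation for where custom fields actually appear in the document JSON:

```python
import json

def extracted_value(payload: dict, field_name: str):
    """Pull a Prompt Mode result out of a delivered document payload.

    Assumes, for illustration only, that custom fields arrive under a
    "custom_fields" key mapping field names to values -- verify the
    real shape against the Veryfi API documentation.
    """
    value = payload.get("custom_fields", {}).get(field_name)
    if value in (None, "null", ""):
        return None  # fallback path: route to manual review, etc.
    return value

# Simulated webhook body (illustrative shape, not the real schema).
body = json.loads(
    '{"id": 123, "custom_fields": {"preferred_payment_method": "ACH"}}'
)
print(extracted_value(body, "preferred_payment_method"))  # ACH
print(extracted_value(body, "po_number"))                 # None
```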