Over the past year, AI has gone from a buzzword to a practical tool that we're integrating into real FileMaker solutions. This isn't about chasing hype. The combination of large language models (LLMs) and FileMaker's existing scripting capabilities opens up genuine business value — automating tasks that previously required manual effort or expensive third-party tools.

Having completed AI Fundamentals certifications myself, I wanted to share the practical patterns we've developed for connecting FileMaker to AI services. These aren't theoretical — they're approaches we're using in production systems today.

Why AI in FileMaker?

FileMaker developers have always been pragmatists. We build solutions that solve real problems for real businesses. AI fits into that philosophy when it's applied to the right use cases: summarising long documents, classifying incoming data, extracting structured information from unstructured text, or helping users find records using natural language instead of exact field values.

The key insight is that modern AI services are just REST APIs. FileMaker has been able to call REST APIs for years using Insert from URL. If you've ever integrated with a payment gateway, a mapping service, or a cloud storage provider, you already have the foundational skills to integrate with AI.

Using Insert from URL with LLM APIs

The mechanics of calling an AI API from FileMaker are straightforward. You construct a JSON payload, set the appropriate cURL options, and use Insert from URL to make the request. Here's the general pattern for calling the OpenAI API:

Set Variable [ $url ; "https://api.openai.com/v1/chat/completions" ]
Set Variable [ $headers ;
  "-X POST" &
  " -H \"Content-Type: application/json\"" &
  " -H \"Authorization: Bearer " & $apiKey & "\""
]
Set Variable [ $body ; JSONSetElement ( "" ;
  [ "model" ; "gpt-4o" ; JSONString ] ;
  [ "messages[0].role" ; "system" ; JSONString ] ;
  [ "messages[0].content" ; $systemPrompt ; JSONString ] ;
  [ "messages[1].role" ; "user" ; JSONString ] ;
  [ "messages[1].content" ; $userPrompt ; JSONString ]
)]
Insert from URL [ Select ; With Dialog: Off ;
  Target: $response ;
  $url ;
  cURL options: $headers & " -d @$body"
]

The same pattern works for Anthropic's Claude API, Google's Gemini, or any other LLM provider. The endpoints, authentication headers, and payload structures differ from provider to provider, but the FileMaker scripting pattern is identical.
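As an illustration, here is the same request shaped for Anthropic's Messages API. The differences are the endpoint, the x-api-key and anthropic-version headers, a required max_tokens value, and a top-level system parameter instead of a system message. The model name is illustrative — check the provider's documentation for current models:

Set Variable [ $url ; "https://api.anthropic.com/v1/messages" ]
Set Variable [ $headers ;
  "-X POST" &
  " -H \"Content-Type: application/json\"" &
  " -H \"x-api-key: " & $apiKey & "\"" &
  " -H \"anthropic-version: 2023-06-01\""
]
Set Variable [ $body ; JSONSetElement ( "" ;
  [ "model" ; "claude-3-5-sonnet-latest" ; JSONString ] ;
  [ "max_tokens" ; 1024 ; JSONNumber ] ;
  [ "system" ; $systemPrompt ; JSONString ] ;
  [ "messages[0].role" ; "user" ; JSONString ] ;
  [ "messages[0].content" ; $userPrompt ; JSONString ]
)]

Note that the reply comes back at content[0].text in the response, rather than choices[0].message.content.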

Structuring Prompts from FileMaker Fields

The real power comes from dynamically building prompts using data already in your FileMaker solution. Rather than asking a generic question, you can construct a prompt that includes context from the current record, related records, or even portal data.

For example, if you're building an invoice classification system, your prompt might pull the vendor name, line item descriptions, and amounts from FileMaker fields and ask the AI to categorise the expense. The system prompt establishes the rules (your chart of accounts, classification criteria), while the user prompt contains the specific invoice data.
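The user prompt for that invoice classifier might be assembled with a calculation along these lines — the table and field names here are hypothetical, and List ( ) gathers the related line-item descriptions through the relationship:

Set Variable [ $userPrompt ;
  "Classify the following invoice into one of our expense categories.¶¶" &
  "Vendor: " & Invoices::VendorName & "¶" &
  "Total: " & Invoices::TotalAmount & "¶" &
  "Line items:¶" &
  List ( InvoiceLines::Description )
]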

A practical tip: store your system prompts in a dedicated table. This lets you iterate on prompts without modifying scripts, and you can maintain different prompt versions for testing. We typically include a "prompt version" field so we can track which version produced which results.

Parsing JSON Responses

FileMaker's native JSON functions — JSONGetElement, JSONListKeys, JSONListValues — make parsing API responses straightforward. For a standard chat completion response, extracting the AI's reply is a single function call:

Set Variable [ $reply ;
  JSONGetElement ( $response ;
    "choices[0].message.content"
  )
]

For more complex responses, such as when you ask the AI to return structured JSON, you can parse the nested structure using the same functions. We often ask the AI to return its response in a specific JSON format, which we then map directly to FileMaker fields.
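Suppose the system prompt instructs the model to reply with only a JSON object such as { "category": "Travel", "confidence": 0.92 }. Mapping that back to (hypothetical) fields is then a few script steps. It's worth guarding against malformed output — FileMaker's JSON functions return text beginning with "?" when parsing fails:

Set Variable [ $json ;
  JSONGetElement ( $response ; "choices[0].message.content" )
]
If [ Left ( JSONFormatElements ( $json ) ; 1 ) = "?" ]
  # The model did not return valid JSON — handle the error
  Exit Script [ Text Result: "invalid response" ]
End If
Set Field [ Invoices::Category ; JSONGetElement ( $json ; "category" ) ]
Set Field [ Invoices::Confidence ; JSONGetElement ( $json ; "confidence" ) ]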

Semantic Search: Beyond Keyword Matching

This is where things get genuinely interesting. Traditional FileMaker finds rely on exact or partial text matches. Semantic search uses embeddings — numerical representations of text — to find records that are conceptually similar to a query, even when the words don't match.

The process works in two stages. First, you generate an embedding for each record by sending its text content to an embeddings API (such as OpenAI's text-embedding-3-small). The API returns a vector — a list of numbers — which you store in a text field in FileMaker. Second, when a user searches, you generate an embedding for their query and compare it against the stored vectors using cosine similarity.
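The first stage might look like this, again with hypothetical field names. The vector comes back as a JSON array, which can be stored as-is:

Set Variable [ $body ; JSONSetElement ( "" ;
  [ "model" ; "text-embedding-3-small" ; JSONString ] ;
  [ "input" ; Projects::Description ; JSONString ]
)]
Insert from URL [ Select ; With Dialog: Off ;
  Target: $response ;
  "https://api.openai.com/v1/embeddings" ;
  cURL options: $headers & " -d @$body"
]
Set Field [ Projects::Embedding ;
  JSONGetElement ( $response ; "data[0].embedding" )
]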

Cosine similarity can be calculated in FileMaker using a custom function or a script that iterates through the vector components. It's computationally intensive for large datasets, so we typically pre-filter using conventional finds before running the similarity calculation on a smaller result set.
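As a sketch, with both vectors stored as JSON arrays, a While ( ) calculation (FileMaker 18 or later) can compute cosine similarity:

While (
  [ i = 0 ; dot = 0 ; magA = 0 ; magB = 0 ;
    n = ValueCount ( JSONListValues ( $vecA ; "" ) ) ] ;
  i < n ;
  [ a = GetAsNumber ( JSONGetElement ( $vecA ; "[" & i & "]" ) ) ;
    b = GetAsNumber ( JSONGetElement ( $vecB ; "[" & i & "]" ) ) ;
    dot = dot + a * b ;
    magA = magA + a ^ 2 ;
    magB = magB + b ^ 2 ;
    i = i + 1 ] ;
  dot / ( Sqrt ( magA ) * Sqrt ( magB ) )
)

Two practical observations: OpenAI's embedding vectors are normalised to unit length, so the plain dot product gives the same ranking if you want to skip the square roots; and at over a thousand dimensions per vector, this is exactly the kind of calculation worth restricting to a pre-filtered found set.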

Real Use Cases

Document Summarisation. We've built solutions where users can paste or import lengthy documents — contracts, reports, correspondence — and the system generates a concise summary stored alongside the original. This saves hours of manual review, particularly in legal and compliance contexts.

Data Classification. Incoming records (support tickets, enquiries, invoices) are automatically classified into categories using an LLM. The AI examines the free-text content and assigns one or more tags from a predefined list. This eliminates the inconsistency of manual tagging and ensures every record is categorised.

Smart Search. Instead of requiring users to remember exact field values or construct complex find requests, semantic search lets them type natural language queries like "projects we did for schools in Glasgow last year" and get relevant results based on meaning rather than exact matches.

Performance Considerations

API calls introduce latency. A typical LLM request takes one to five seconds, which is fine for on-demand operations but too slow for batch processing hundreds of records. For batch operations, we use a looping script that processes records sequentially with a short pause between calls to respect rate limits, and we run these as server-side scripts during off-peak hours.
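A minimal shape for such a batch script, with an illustrative sub-script name:

Go to Record/Request/Page [ First ]
Loop
  Perform Script [ "Summarise Current Record" ]
  Pause/Resume Script [ Duration (seconds): 1 ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop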

Cost is another factor. LLM API calls are priced per token (a token is roughly three-quarters of an English word). For most business applications, the cost per request is fractions of a penny, but it can add up with high-volume batch operations. We recommend logging every API call — including the token count and cost — so you can monitor usage and set budget alerts.
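OpenAI-style responses include a usage object, so the logging can happen in the same script that makes the call. The log table and fields below are hypothetical:

Set Variable [ $tokens ; JSONGetElement ( $response ; "usage.total_tokens" ) ]
Go to Layout [ "API Log" ]
New Record/Request
Set Field [ APILog::Timestamp ; Get ( CurrentTimestamp ) ]
Set Field [ APILog::Model ; "gpt-4o" ]
Set Field [ APILog::TotalTokens ; $tokens ]
Commit Records/Requests [ With Dialog: Off ]
Go to Layout [ original layout ]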

Store API keys securely. Never hard-code them in scripts. We use a dedicated preferences table with restricted access privileges, and we rotate keys regularly.

Getting Started

If you're new to AI integration, start with a simple use case: take a text field from an existing solution and build a script that sends it to an LLM for summarisation. Once you're comfortable with the request-response pattern, you can explore more sophisticated applications like classification, extraction, and semantic search.

The barrier to entry is lower than you might expect. If you can build a FileMaker script that calls a REST API and parses JSON — skills that many FileMaker developers already have — you can integrate AI into your solutions today.

Interested in adding AI capabilities to your FileMaker solution? Let's discuss what's possible.

Book a Free Consultation