watsonx + Elastic
Ever wondered how to tap into the power of Elasticsearch without wrestling with its entire Query DSL? Or how to let non-technical teammates ask questions like, “Which employees left this year?” and automatically run the right Elasticsearch SQL? In this comprehensive guide, you’ll learn exactly how to do both—combining the simplicity of SQL and the intelligence of Watsonx to handle everything from basic lookups to advanced text searches.
Check out the full guide:

```bash
git clone <placeholder for public repo>
```

- Follow this guide to set up watsonx Discovery to get the database and Kibana.
- We need the following credentials in our `.env`:

```
ELASTIC_URL=""
ELASTIC_USERNAME=""
ELASTIC_PASSWORD=""
```
- Connect with watsonx.ai by following the steps here.
- Learn more about using watsonx.ai here.
- We need the following credentials in our `.env`:

```
IBM_CLOUD_API_KEY=""
WATSONX_ENDPOINT=""
WATSONX_PROJECT_ID=""
```

For this walkthrough, we use the Kaggle Employee Dataset.
- Create a virtual environment:

```bash
python3.11 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
- Install dependencies:

```bash
pip install -r requirements.txt
```
Before we begin, let's learn about Elasticsearch SQL.
Elasticsearch SQL provides an SQL-based interface to query Elasticsearch data. It allows querying Elasticsearch indices as if they were traditional database tables, enabling users familiar with SQL to leverage Elasticsearch without needing to master its native Query DSL syntax (Elasticsearch SQL Documentation).
- SQL Compatibility: Supports standard SQL operations like `SELECT`, `WHERE`, `GROUP BY`, and aggregation functions.
- Indexing and Metadata: Elasticsearch indices serve as tables, while documents act as rows. Metadata commands (`SHOW TABLES`, `DESCRIBE`) allow schema exploration (Metadata Commands).
- Date Handling: Date fields support various functions (`YEAR()`, `MONTH()`, `DATE_TRUNC()`, etc.), providing flexible date manipulation (Date Functions).
- Full-text Search: Special functions like `MATCH()` enable powerful text search, integrating Elasticsearch's full-text capabilities within SQL queries (Match Queries). Example queries follow the lists below.
- Search Applications: Quickly implement search functionality using SQL syntax, allowing fast and intuitive development.
- Analytics and Reporting: Easily aggregate, analyze, and visualize large datasets by integrating Elasticsearch SQL with Business Intelligence (BI) tools (Analytics with Elasticsearch SQL).
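To make these features concrete, here is a short Python sketch that runs a few Elasticsearch SQL queries through the official client's SQL endpoint. The endpoint URL and the field names (`department`, `exit_date`, `job_title`) are assumptions; adjust them to your cluster and mapping.

```python
from elasticsearch import Elasticsearch

# Assumed local endpoint; swap in your own cluster details.
es = Elasticsearch("http://localhost:9200")

# Standard SQL: SELECT, GROUP BY, and aggregations.
resp = es.sql.query(
    query="SELECT department, COUNT(*) AS headcount "
          "FROM employee_data GROUP BY department",
    format="json",
)
print(resp["columns"], resp["rows"])

# Schema exploration with the metadata commands.
es.sql.query(query="SHOW TABLES", format="json")
es.sql.query(query="DESCRIBE employee_data", format="json")

# Date functions and full-text MATCH() in the same dialect.
es.sql.query(query="SELECT * FROM employee_data WHERE YEAR(exit_date) = 2024", format="json")
es.sql.query(query="SELECT * FROM employee_data WHERE MATCH(job_title, 'engineer')", format="json")
```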
Traditional SQL queries are designed primarily for structured, relational data and rely on fixed schemas and exact-match logic (or basic pattern matching with LIKE). In contrast, Elasticsearch queries harness advanced text analysis, tokenization, and relevance scoring, making them far more flexible and efficient when handling unstructured or semi-structured data. With features like fuzzy matching, proximity and span queries, and dynamic query templating, Elasticsearch provides a richer, more nuanced search experience—enabling rapid, distributed searches across massive datasets that traditional SQL simply cannot match. This combination of power, flexibility, and scalability makes Elasticsearch queries significantly better suited for modern search and analytics use cases.
We rely on LLMs to generate these queries from natural language. Because SQL syntax is abundant in the training data of these models, we can expect high accuracy across a wide range of queries.
In the indexing code, we (see the sketch after this list):
- Load environment variables (e.g., Elasticsearch credentials).
- Create an Elasticsearch client.
- Read the employee_data.csv, format date columns, and convert them to ISO 8601.
- Save the formatted data to both CSV and JSON.
- Define an index mapping for our `employee_data` index in Elasticsearch.
- Create the index if it doesn’t already exist.
- Bulk-load the data from the JSON file into Elasticsearch.
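A minimal sketch of that flow, assuming the date columns are named `hire_date` and `exit_date` (check the actual CSV headers) and bulk-loading the records directly rather than via the intermediate JSON file:

```python
import os

import pandas as pd
from dotenv import load_dotenv
from elasticsearch import Elasticsearch, helpers

load_dotenv()  # pulls ELASTIC_URL / ELASTIC_USERNAME / ELASTIC_PASSWORD from .env
es = Elasticsearch(
    os.environ["ELASTIC_URL"],
    basic_auth=(os.environ["ELASTIC_USERNAME"], os.environ["ELASTIC_PASSWORD"]),
)

# Read the CSV and normalise the date columns to ISO 8601.
df = pd.read_csv("employee_data.csv")
for col in ["hire_date", "exit_date"]:  # assumed column names
    df[col] = pd.to_datetime(df[col], errors="coerce").dt.strftime("%Y-%m-%d")
df.to_json("employee_data.json", orient="records")

# Create the index with an explicit mapping if it doesn't already exist.
mapping = {"properties": {"hire_date": {"type": "date"}, "exit_date": {"type": "date"}}}
if not es.indices.exists(index="employee_data"):
    es.indices.create(index="employee_data", mappings=mapping)

# Bulk-load the documents (NaN -> None so they serialise cleanly).
records = df.where(pd.notna(df), None).to_dict(orient="records")
helpers.bulk(es, ({"_index": "employee_data", "_source": r} for r in records))
```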
We have added a Python class wrapper (`WatsonxWrapper`) that handles watsonx LLM calls, loads environment variables for the credentials, and abstracts away the text generation and streaming logic. It includes synchronous inference (`generate_text`) and streaming inference (`generate_text_stream`).
This class will be our main interface to query the LLM for natural language to ES SQL conversion, as well as for automated metadata generation.
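A minimal sketch of such a wrapper, built on the `ibm-watsonx-ai` SDK; the model ID and generation parameters here are assumptions, not necessarily the repo's choices.

```python
import os

from dotenv import load_dotenv
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

load_dotenv()  # pulls IBM_CLOUD_API_KEY / WATSONX_ENDPOINT / WATSONX_PROJECT_ID


class WatsonxWrapper:
    """Thin interface to a watsonx.ai model (a sketch, not the repo's exact class)."""

    def __init__(self, model_id="ibm/granite-13b-instruct-v2", params=None):
        # model_id and default params are illustrative assumptions.
        self.model = ModelInference(
            model_id=model_id,
            credentials=Credentials(
                url=os.environ["WATSONX_ENDPOINT"],
                api_key=os.environ["IBM_CLOUD_API_KEY"],
            ),
            project_id=os.environ["WATSONX_PROJECT_ID"],
            params=params or {"decoding_method": "greedy", "max_new_tokens": 300},
        )

    def generate_text(self, prompt: str) -> str:
        """Synchronous inference: return the full completion."""
        return self.model.generate_text(prompt=prompt)

    def generate_text_stream(self, prompt: str):
        """Streaming inference: yield chunks as they arrive."""
        yield from self.model.generate_text_stream(prompt=prompt)
```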
To convert natural language into Elasticsearch SQL, we need to create a metadata dictionary of our index so that the LLM has the schema context to generate valid Elasticsearch SQL. We’ll use watsonx to generate natural language descriptions of each field, along with sample values, data types, etc. This is helpful for:
- Documenting our data automatically.
- Building data catalogs or self-service analytics tools.

Creating the Metadata Dictionary
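Below is a sketch of how that metadata dictionary can be built, reusing the `es` client and `WatsonxWrapper` from the sketches above; the prompt wording is an assumption.

```python
# Reuses `es` and `WatsonxWrapper` from the earlier sketches.
fields = es.indices.get_mapping(index="employee_data")["employee_data"]["mappings"]["properties"]

llm = WatsonxWrapper()
metadata = {}
for name, spec in fields.items():
    # Pull a few sample values to ground the generated description.
    hits = es.search(index="employee_data", size=3, source=[name])["hits"]["hits"]
    samples = [h["_source"].get(name) for h in hits]
    prompt = (
        f"In one sentence, describe the field '{name}' "
        f"(type: {spec.get('type')}) with sample values {samples}."
    )
    metadata[name] = {
        "type": spec.get("type"),
        "samples": samples,
        "description": llm.generate_text(prompt).strip(),
    }
```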
After indexing, we can now harness Watsonx to interpret natural language queries and convert them into ES SQL. We then execute the generated SQL queries against Elasticsearch, returning the results in a tabular form (using pandas DataFrames).
Code: nl2eql (a sketch follows this list)

- Load environment variables and set up the `Elasticsearch` client.
- Import the `WatsonxWrapper`, custom prompts, and parameters.
- Define a list of questions.
- Generate ES SQL using the Watsonx model.
- Execute the SQL and display the results.
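A condensed sketch of that flow, with an assumed prompt template (the repo's actual prompts differ) and reusing `es`, `llm`, and `metadata` from above:

```python
import pandas as pd

# Assumed prompt template for natural language -> Elasticsearch SQL.
PROMPT = (
    "You translate questions into Elasticsearch SQL for the index employee_data.\n"
    "Schema: {schema}\n"
    "Question: {question}\n"
    "Return only the SQL query."
)

def nl_to_es_sql(question: str, schema: str) -> pd.DataFrame:
    # Generate the query, execute it, and return the results as a DataFrame.
    sql = llm.generate_text(PROMPT.format(schema=schema, question=question)).strip()
    resp = es.sql.query(query=sql, format="json")
    return pd.DataFrame(resp["rows"], columns=[c["name"] for c in resp["columns"]])

print(nl_to_es_sql("Which employees left this year?", schema=str(metadata)))
```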
Check out the prompts used here.
We have created a simple Streamlit app where you can test the results interactively.
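A minimal sketch of what the app's core could look like (the repo's `streamlit_app.py` is more complete), assuming the `nl_to_es_sql` helper and `metadata` dictionary sketched earlier:

```python
import streamlit as st

st.title("Natural Language to Elasticsearch SQL")
question = st.text_input("Ask a question about the employee data")
if question:
    # Generate, execute, and display the results as an interactive table.
    st.dataframe(nl_to_es_sql(question, schema=str(metadata)))
```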
Run it with:

```bash
streamlit run streamlit_app.py
```

Yohooo! Celebrate your journey up to this point.
To summarise: following this guide, you will be able to do advanced Retrieval-Augmented Generation with the power of watsonx and Elasticsearch SQL.
To dive deeper into more Elasticsearch SQL features, check out `elasticsearch_queries`.
If you would like to see the detailed LICENSE, click here.
# Copyright IBM Corp. 2025
# SPDX-License-Identifier: Apache-2.0
