
Setting Up AI Features

Introduction

To use the AI features described in the previous section, you need to configure Flowable with a set of environment properties that connect it to an AI service and make those features available in Flowable Design and Flowable Work.

To use the AI capabilities, you need Flowable version 2025.1.01 or later.

info

To set up Flowable AI capabilities with Flowable 3.17, please refer to the previous documentation.

To use model generation or the AI service preview, the properties must be set for Flowable Design. For all features except model generation, the properties are required for Flowable Work.

note

If you have not enabled AI, or your system does not have an AI license, AI features such as model generation will not appear in the user interface.

Use OpenAI ChatGPT as the underlying AI service

With the Flowable Artifacts

When you use the Flowable artifacts, set the following properties to enable OpenAI:

For Flowable Design:

application.design.ai.type=openai
spring.ai.openai.api-key=<your OpenAI API Key>
spring.ai.openai.chat.options.temperature=0.3
spring.ai.openai.chat.options.model=gpt-4o

For Flowable Work:

application.ai.type=openai
spring.ai.openai.api-key=<your OpenAI API Key>
spring.ai.openai.chat.options.temperature=0.3
spring.ai.openai.chat.options.model=gpt-4o
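The same configuration can also be supplied through environment variables instead of a properties file: Spring Boot's relaxed binding maps each property name to an environment variable by uppercasing it and replacing dots and dashes with underscores. A sketch for the Flowable Work properties above (the API key value is a placeholder):

```shell
# Equivalent environment variables for the Flowable Work properties above
# (Spring Boot relaxed binding: dots and dashes become underscores, names are uppercased).
export APPLICATION_AI_TYPE=openai
export SPRING_AI_OPENAI_API_KEY="sk-your-key"   # placeholder -- use your real OpenAI API key
export SPRING_AI_OPENAI_CHAT_OPTIONS_TEMPERATURE=0.3
export SPRING_AI_OPENAI_CHAT_OPTIONS_MODEL=gpt-4o
```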

Using the Responses API and Codex Models (from 2025.2.04+)

Starting with 2025.2.04, Flowable supports OpenAI's Responses API, which is required for Codex models such as gpt-5.2-codex. Codex models are recommended for AI-Assisted Modeling, as they give superior results for both model generation and script writing.

To use a Codex model, configure the Responses API as follows:

For Flowable Design:

application.design.ai.type=openai
spring.ai.openai.api-key=<your OpenAI API Key>
spring.ai.openai.chat.options.temperature=1.0
spring.ai.openai.chat.options.model=gpt-5.2-codex
spring.ai.openai.chat.options.api-type=responses

For Flowable Work:

application.ai.type=openai
spring.ai.openai.api-key=<your OpenAI API Key>
spring.ai.openai.chat.options.temperature=1.0
spring.ai.openai.chat.options.model=gpt-5.2-codex
spring.ai.openai.chat.options.api-type=responses

note

The key difference is setting spring.ai.openai.chat.options.api-type=responses to use the Responses API instead of the default Chat Completions API. This property is required when using Codex models (e.g., gpt-5.2-codex), as these models are only accessible via the Responses API.

For non-Codex models (e.g., gpt-4o, gpt-4.1-mini), you do not need to set this property — the default Chat Completions API will be used automatically.

Temperature setting

Recent models such as gpt-5.2-codex often require the temperature to be set to 1.0. Using a lower temperature value (e.g., 0.3) may result in errors or degraded output with these models. When configuring a Codex model, always set spring.ai.openai.chat.options.temperature=1.0.

If you want to test the agent model from Design, you need to enable the following for Flowable Design:

flowable.design.agent.enable-agent-test=true

and for Flowable Work:

flowable.agent.enable-agent-test=true

Building Your Own Project

You can create your own project through the Flowable Initializr, or you can add the following:

Custom maven configuration
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>

In addition, you also need to add the Spring AI bom to your dependency management:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
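For orientation, here is how the starter dependency and the bom fit together in a single pom.xml. This is only a sketch: your real pom will also contain the usual project coordinates and whatever Flowable dependencies your setup needs:

```xml
<!-- Sketch: combining the pieces above in one pom.xml -->
<project>
  <dependencyManagement>
    <dependencies>
      <!-- The imported bom pins all Spring AI module versions -->
      <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-bom</artifactId>
        <version>1.0.0</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <!-- No version element needed: it is resolved through the bom -->
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-starter-model-openai</artifactId>
    </dependency>
  </dependencies>
</project>
```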

For Flowable Work you also need to add the following dependencies to your pom.xml:

<dependency>
  <groupId>com.flowable.platform</groupId>
  <artifactId>flowable-platform-ai</artifactId>
</dependency>
<dependency>
  <groupId>com.flowable.platform</groupId>
  <artifactId>flowable-agent-engine-rest</artifactId>
</dependency>

And for Flowable Design you also need to add the following dependency to your pom.xml:

<dependency>
  <groupId>com.flowable.design</groupId>
  <artifactId>flowable-design-ai</artifactId>
</dependency>

In addition, the following properties are required:

spring.ai.openai.api-key=<your OpenAI API Key>
spring.ai.openai.chat.options.temperature=0.3
spring.ai.openai.chat.options.model=gpt-4o

To use Codex models with the Responses API (from 2025.2.04+), add the api-type property as described above.

If you want to test the agent model from Design, you need to enable the following for Flowable Design:

flowable.design.agent.enable-agent-test=true

and for Flowable Work:

flowable.agent.enable-agent-test=true

Use Anthropic Claude as the underlying AI service

With the Flowable Artifacts

When you use the Flowable artifacts, set the following properties to enable Anthropic:

For Flowable Design:

application.design.ai.type=anthropic
spring.ai.anthropic.api-key=<your Claude API Key>
spring.ai.anthropic.chat.options.model=claude-sonnet-4-6

For Flowable Work:

application.ai.type=anthropic
spring.ai.anthropic.api-key=<your Claude API Key>
spring.ai.anthropic.chat.options.model=claude-sonnet-4-6

If no model is explicitly configured, the Spring AI default model will be used. Available models include claude-sonnet-4-6, claude-opus-4-6, and claude-haiku-4-5-20251001.

note

Prior to 2025.2.03, the value anthropic3 should be used instead of anthropic. From 2025.2.03 onwards, both anthropic and anthropic3 are supported.

Using Claude with the Responses API (from 2025.2.04+)

Starting with 2025.2.04, the Responses API can also be used with Anthropic Claude models. Combined with claude-opus-4-6, this gives results comparable to gpt-5.2-codex for AI-Assisted Modeling, making it an excellent alternative for both model generation and script writing.

For Flowable Design:

application.design.ai.type=anthropic
spring.ai.anthropic.api-key=<your Claude API Key>
spring.ai.anthropic.chat.options.temperature=1.0
spring.ai.anthropic.chat.options.model=claude-opus-4-6
spring.ai.anthropic.chat.options.api-type=responses

For Flowable Work:

application.ai.type=anthropic
spring.ai.anthropic.api-key=<your Claude API Key>
spring.ai.anthropic.chat.options.temperature=1.0
spring.ai.anthropic.chat.options.model=claude-opus-4-6
spring.ai.anthropic.chat.options.api-type=responses

Building Your Own Project

You can create your own project through the Flowable Initializr, or you can add the following:

Custom maven configuration
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-anthropic</artifactId>
</dependency>

In addition, you also need to add the Spring AI bom to your dependency management:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

For Flowable Work you also need to add the following dependencies to your pom.xml:

<dependency>
  <groupId>com.flowable.platform</groupId>
  <artifactId>flowable-platform-ai</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-anthropic</artifactId>
</dependency>

And for Flowable Design you also need to add the following dependency to your pom.xml:

<dependency>
  <groupId>com.flowable.design</groupId>
  <artifactId>flowable-design-ai</artifactId>
</dependency>

In addition, the following properties are required:

spring.ai.anthropic.api-key=<your Claude API Key>
spring.ai.anthropic.chat.options.model=claude-sonnet-4-6

To use claude-opus-4-6 with the Responses API (from 2025.2.04+), add the api-type and temperature properties as described above.

If you want to test the agent model from Design, you need to enable the following for Flowable Design:

flowable.design.agent.enable-agent-test=true

and for Flowable Work:

flowable.agent.enable-agent-test=true

Use Azure OpenAI as the underlying AI service

With the Flowable Artifacts

When you use the Flowable artifacts, set the following properties to enable Azure OpenAI:

For Flowable Design:

application.design.ai.type=openai
spring.ai.openai.api-key=<your Azure OpenAI API Key>
spring.ai.openai.base-url=<Your Azure OpenAI Endpoint>/openai/deployments/<deploymentId>?version=<version>
spring.ai.openai.chat.completions-path=/chat/completions
spring.ai.openai.chat.options.temperature=0.3

For Flowable Work:

application.ai.type=openai
spring.ai.openai.api-key=<your Azure OpenAI API Key>
spring.ai.openai.base-url=<Your Azure OpenAI Endpoint>/openai/deployments/<deploymentId>?version=<version>
spring.ai.openai.chat.completions-path=/chat/completions
spring.ai.openai.chat.options.temperature=0.3
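To make the base URL placeholder syntax concrete, here is an example using entirely hypothetical values: an Azure resource called my-resource, a deployment id of gpt-4o, and API version 2024-06-01. Substitute the values from your own Azure OpenAI deployment:

```properties
# Hypothetical example values -- replace with your own resource, deployment id, and version
spring.ai.openai.base-url=https://my-resource.openai.azure.com/openai/deployments/gpt-4o?version=2024-06-01
spring.ai.openai.chat.completions-path=/chat/completions
```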

If you want to test the agent model from Design, you need to enable the following for Flowable Design:

flowable.design.agent.enable-agent-test=true

and for Flowable Work:

flowable.agent.enable-agent-test=true

Building Your Own Project

You can create your own project through the Flowable Initializr, or you can add the following:

Custom maven configuration
<dependency>
  <groupId>org.springframework.ai</groupId>
  <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>

In addition, you also need to add the Spring AI bom to your dependency management:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

For Flowable Work you also need to add the following dependencies to your pom.xml:

<dependency>
  <groupId>com.flowable.platform</groupId>
  <artifactId>flowable-platform-ai</artifactId>
</dependency>
<dependency>
  <groupId>com.flowable.platform</groupId>
  <artifactId>flowable-agent-engine-rest</artifactId>
</dependency>

And for Flowable Design you also need to add the following dependency to your pom.xml:

<dependency>
  <groupId>com.flowable.design</groupId>
  <artifactId>flowable-design-ai</artifactId>
</dependency>

In addition, the following properties are required:

spring.ai.openai.api-key=<your Azure OpenAI API Key>
spring.ai.openai.base-url=<Your Azure OpenAI Endpoint>/openai/deployments/<deploymentId>?version=<version>
spring.ai.openai.chat.completions-path=/chat/completions
spring.ai.openai.chat.options.temperature=0.3
spring.ai.openai.chat.options.model=gpt-4o

If you want to test the agent model from Design, you need to enable the following for Flowable Design:

flowable.design.agent.enable-agent-test=true

and for Flowable Work:

flowable.agent.enable-agent-test=true