Using the LLM node in a workflow, you can initiate a conversation with an online LLM service, leveraging the capabilities of large models to assist in completing a series of business processes.

Since conversations with LLM services are often time-consuming, the LLM node can only be used in asynchronous workflows.

First, select a connected LLM service. If no LLM service is connected yet, you need to add an LLM service configuration first. See: LLM Service Management
After you select a service, the application will try to retrieve the list of available models from the LLM service for you to choose from. Some online LLM services expose model-listing APIs that do not follow the standard API protocol; in that case, you can also enter the model ID manually.
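This fallback behavior can be sketched as follows. The `/models` endpoint and the `{"data": [{"id": ...}]}` response shape follow the OpenAI-compatible convention and are assumptions for this sketch, not this application's actual implementation:

```python
import json
import urllib.request


def list_models(base_url: str, api_key: str, timeout: float = 10.0):
    """Try the OpenAI-style GET /models endpoint; return None if the
    service is unreachable or its response does not match the expected
    shape (i.e., a non-standard model-listing API)."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            payload = json.loads(resp.read())
        return [m["id"] for m in payload["data"]]
    except Exception:
        return None  # non-standard or unreachable: caller falls back to manual entry


def pick_model(base_url: str, api_key: str, manual_id: str = None) -> str:
    models = list_models(base_url, api_key)
    if models:
        return models[0]  # in a real UI, present the whole list to the user
    if manual_id:
        return manual_id  # fall back to a manually entered model ID
    raise ValueError("model list unavailable; please enter a model ID manually")
```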

You can adjust the parameters for calling the LLM model as needed.
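As a sketch of what adjusting these parameters amounts to, the following builds an OpenAI-style chat request body (an assumption for illustration; which parameters a given service actually honors varies):

```python
def build_request_body(model: str, messages: list, **overrides) -> dict:
    """Assemble a chat request body with common sampling parameters.
    Parameter names follow the OpenAI-compatible convention."""
    params = {"temperature": 0.7, "top_p": 1.0, "max_tokens": 1024}
    params.update(overrides)  # caller-supplied values win over the defaults
    return {"model": model, "messages": messages, **params}


msgs = [{"role": "user", "content": "Summarize the order status."}]
body = build_request_body("example-model", msgs, temperature=0.2)

# A response-format hint, if the service supports it:
json_body = build_request_body(
    "example-model", msgs, response_format={"type": "json_object"}
)
```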

It's worth noting the Response format setting. This option tells the large model which format to use for its response: text or JSON. If you select JSON mode, be aware of the following:
- Models that do not support JSON mode may reject the request with a 400 status code (no body) error.

The array of messages sent to the LLM model can include a set of historical messages. Messages support three types: System, User, and Assistant.
For user messages, if the model supports it, you can add multiple pieces of content to a single prompt; these map to the content parameter as an array. If your model only accepts the content parameter as a string (which is the case for most models without multi-modal support), split the message into multiple prompts, each containing only one piece of content. The node will then send the content as a string.
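A minimal sketch of that splitting step, assuming the multi-part content uses the common `{"type": "text", "text": ...}` shape:

```python
def flatten_multipart(messages: list) -> list:
    """Split user messages whose content is a list of parts into one
    message per part, so content is always sent as a plain string.
    Useful for models that only accept string content."""
    out = []
    for msg in messages:
        content = msg.get("content")
        if msg.get("role") == "user" and isinstance(content, list):
            for part in content:
                # Keep the text of each part; coerce anything else to str.
                text = part.get("text", "") if isinstance(part, dict) else str(part)
                out.append({"role": "user", "content": text})
        else:
            out.append(msg)  # system/assistant messages pass through unchanged
    return out
```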

You can use variables in the message content to reference the workflow context.
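For illustration, such variable substitution can work like this. The `{{name}}` placeholder syntax is an assumption for this sketch; check the actual variable syntax your workflow uses:

```python
import re


def render(template: str, context: dict) -> str:
    """Replace {{name}} placeholders with values from the workflow
    context; unknown placeholders are left untouched."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), m.group(0))),
        template,
    )
```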

You can use the response content of the LLM node as a variable in other nodes.
