In a single-node environment, plugins can typically fulfill their requirements through in-process state, events, or tasks. In cluster mode, however, the same plugin may run on multiple instances simultaneously, which raises typical issues such as inconsistent in-memory state, duplicated event and task handling, and race conditions on shared resources.
The NocoBase core provides various middleware interfaces at the application layer to help plugins reuse unified capabilities in a cluster environment. The following sections will introduce the usage and best practices of caching, synchronous messaging, message queues, and distributed locks, with source code references.
For data that needs to be stored in memory, it is recommended to use the system's built-in cache component for management.
- app.cache.Cache provides basic operations such as set/get/del/reset, supports wrap and wrapWithCondition to encapsulate caching logic, and offers batch methods such as mset/mget/mdel.
- Entries can be given a ttl, and because the cache can be backed by a shared store rather than process memory, cached data survives the restart of a single instance.
- Example: cache initialization and usage in plugin-auth.
If some in-memory state cannot be managed with a distributed cache (for example, because it cannot be serialized), then when a user action changes that state, the change needs to be broadcast to the other instances via a sync signal to keep their state consistent.
- A plugin publishes a signal via sendSyncMessage, which internally calls app.syncMessageManager.publish and automatically adds an application-level prefix to the channel to avoid conflicts.
- publish can be given a transaction; the message is then sent only after the database transaction commits, keeping state and messages in sync.
- A plugin implements handleSyncMessage to process messages from other instances. Subscribing during the beforeLoad phase is well suited to scenarios such as configuration changes and schema synchronization.
- Example: plugin-data-source-main uses sync messages to keep schemas consistent across multiple nodes.
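The commit-then-publish ordering can be sketched with a toy bus. MiniSyncBus and FakeTransaction below are illustrative stand-ins, not NocoBase APIs; the point is that a message attached to a transaction is delivered only once the transaction commits.

```typescript
// Illustrative sketch of the sync-message pattern: state changes
// locally inside a transaction, and the notification to other
// instances is deferred until the transaction commits.
type Handler = (message: unknown) => void;

class FakeTransaction {
  private hooks: Array<() => void> = [];
  afterCommit(fn: () => void) {
    this.hooks.push(fn);
  }
  commit() {
    this.hooks.forEach((fn) => fn());
  }
}

class MiniSyncBus {
  private channels = new Map<string, Handler[]>();

  subscribe(channel: string, handler: Handler) {
    const list = this.channels.get(channel) ?? [];
    list.push(handler);
    this.channels.set(channel, list);
  }

  // If a transaction is supplied, publish after it commits,
  // mirroring the afterCommit guarantee described above.
  publish(channel: string, message: unknown, options?: { transaction?: FakeTransaction }) {
    const send = () => (this.channels.get(channel) ?? []).forEach((h) => h(message));
    if (options?.transaction) options.transaction.afterCommit(send);
    else send();
  }
}

// A subscriber on another "instance" sees the message only on commit.
const bus = new MiniSyncBus();
const received: unknown[] = [];
bus.subscribe('main.collection:change', (msg) => received.push(msg));

const tx = new FakeTransaction();
bus.publish('main.collection:change', { collection: 'posts' }, { transaction: tx });
const beforeCommit = received.length; // nothing delivered yet
tx.commit();
const afterCommit = received.length; // delivered after commit
```

Deferring delivery this way means other instances never react to a change that ends up rolled back.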
Message broadcasting is the underlying component behind sync signals and can also be used directly whenever you need to broadcast messages between instances.
- app.pubSubManager.subscribe(channel, handler, { debounce }) subscribes to a channel across instances; the debounce option prevents frequent callbacks caused by repeated broadcasts.
- publish supports skipSelf (default true) and onlySelf to control whether the message is also delivered back to the current instance.
- Example: plugin-async-task-manager uses PubSub to broadcast task cancellation events.
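The debounce behavior can be shown with a small sketch. MiniPubSub is a hypothetical in-memory stand-in assuming subscribe(channel, handler, { debounce }) semantics as described above: repeated broadcasts inside the window collapse into a single callback.

```typescript
// Illustrative sketch of a debounced subscription; not the
// NocoBase pubSubManager implementation.
type Handler = (msg: unknown) => void;

// Classic trailing-edge debounce: each call resets the timer,
// so only the last message in a burst triggers the handler.
function debounce(fn: Handler, wait: number): Handler {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (msg) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(msg), wait);
  };
}

class MiniPubSub {
  private subs = new Map<string, Handler[]>();

  subscribe(channel: string, handler: Handler, options: { debounce?: number } = {}) {
    const wrapped = options.debounce ? debounce(handler, options.debounce) : handler;
    const list = this.subs.get(channel) ?? [];
    list.push(wrapped);
    this.subs.set(channel, list);
  }

  publish(channel: string, msg: unknown) {
    (this.subs.get(channel) ?? []).forEach((h) => h(msg));
  }
}

// Three rapid broadcasts of the same event fire the handler once.
const pubsub = new MiniPubSub();
let calls = 0;
pubsub.subscribe('async-tasks:cancel', () => { calls++; }, { debounce: 20 });
pubsub.publish('async-tasks:cancel', { taskId: 1 });
pubsub.publish('async-tasks:cancel', { taskId: 1 });
pubsub.publish('async-tasks:cancel', { taskId: 1 });

// Resolve with the call count once the debounce window has passed.
const settled = new Promise<number>((resolve) =>
  setTimeout(() => resolve(calls), 50),
);
```

This is why debounce suits idempotent notifications such as "task cancelled": handling the burst once is enough.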
The message queue is used to schedule asynchronous tasks, suitable for handling long-running or retryable operations.
- Subscribe with app.eventQueue.subscribe(channel, { idle, process, concurrency }); process returns a Promise, and AbortSignal.timeout can be used to enforce timeouts.
- publish automatically prefixes the channel with the application name and supports options such as timeout and maxRetries.
- The queue defaults to an in-memory adapter, which can be swapped for an external adapter such as RabbitMQ as needed.
- Example: plugin-async-task-manager uses EventQueue to schedule tasks.
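The retry behavior can be sketched with a minimal in-memory queue. MiniQueue below is a hypothetical stand-in assuming semantics like those described: process returns a Promise, and a rejected message is retried up to maxRetries times.

```typescript
// Illustrative sketch of queue-with-retries semantics; not the
// NocoBase eventQueue or any of its adapters.
type Processor = (msg: unknown) => Promise<void>;

class MiniQueue {
  private processors = new Map<string, { process: Processor }>();

  subscribe(channel: string, options: { process: Processor }) {
    this.processors.set(channel, options);
  }

  async publish(channel: string, msg: unknown, options: { maxRetries?: number } = {}) {
    const sub = this.processors.get(channel);
    if (!sub) return;
    // One initial attempt plus maxRetries retries.
    const attempts = 1 + (options.maxRetries ?? 0);
    for (let i = 0; i < attempts; i++) {
      try {
        await sub.process(msg);
        return;
      } catch {
        // fall through and retry; a real adapter would also back off
      }
    }
  }
}

// A flaky processor fails twice, then succeeds on the third attempt.
const queue = new MiniQueue();
let attempts = 0;
queue.subscribe('async-task', {
  process: async () => {
    attempts++;
    if (attempts < 3) throw new Error('transient failure');
  },
});

const done = queue
  .publish('async-task', { id: 42 }, { maxRetries: 3 })
  .then(() => attempts);
```

Because messages may be retried, processors should be written to be idempotent.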
When you need to avoid race conditions, you can use a distributed lock to serialize access to a resource.
- The lock manager defaults to a local adapter; distributed implementations such as Redis can be registered.
- Use app.lockManager.runExclusive(key, fn, ttl) or acquire/tryAcquire to control concurrency.
- ttl acts as a safeguard that eventually releases the lock, preventing it from being held indefinitely in exceptional cases.
- Example: plugin-data-source-main uses a distributed lock to protect the field deletion process.
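The runExclusive semantics can be sketched with a promise-chain mutex. MiniLockManager is a hypothetical single-process stand-in, not the NocoBase lockManager or a Redis adapter; it shows the two key properties: callers on the same key run one at a time, and a ttl force-releases the lock if the holder never does.

```typescript
// Illustrative single-process sketch of runExclusive(key, fn, ttl).
class MiniLockManager {
  // Tail of the waiting chain per key; a new caller queues behind it.
  private tails = new Map<string, Promise<void>>();

  async runExclusive<T>(key: string, fn: () => Promise<T>, ttlMs = 5_000): Promise<T> {
    const prev = this.tails.get(key) ?? Promise.resolve();
    let release!: () => void;
    const current = new Promise<void>((resolve) => (release = resolve));
    this.tails.set(key, prev.then(() => current));
    await prev; // wait for every earlier holder of this key
    // Safeguard: force-release after ttl so a crashed holder
    // cannot block the key forever (it does not cancel fn).
    const timer = setTimeout(release, ttlMs);
    try {
      return await fn();
    } finally {
      clearTimeout(timer);
      release();
    }
  }
}

// Two concurrent critical sections on the same key are serialized:
// the second one starts only after the first finishes.
const locks = new MiniLockManager();
const order: string[] = [];
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

const done = Promise.all([
  locks.runExclusive('field:delete', async () => {
    order.push('a:start');
    await sleep(30);
    order.push('a:end');
  }),
  locks.runExclusive('field:delete', async () => {
    order.push('b:start');
    order.push('b:end');
  }),
]).then(() => order);
```

A distributed adapter replaces the in-process chain with shared state (for example a Redis key with an expiry), but the calling code keeps the same shape.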
- Prefer the core-provided app.cache and app.syncMessageManager over reimplementing cross-node communication logic in plugins.
- Publish messages in transaction.afterCommit (syncMessageManager.publish has this built in) to keep data and messages consistent.
- Set reasonable timeout, maxRetries, and debounce values to avoid new traffic spikes in exceptional situations.

With these capabilities, plugins can safely share state, synchronize configuration, and schedule tasks across instances, meeting the stability and consistency requirements of cluster deployments.