- All events within a request batch share the same operation type.
- The cluster commits an entire batch at once, applying its events in series.
- The cluster returns a single reply for each request it commits.
- The cluster's reply contains results corresponding to each event in the request.
- Unless linked, events within a request succeed or fail independently.
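The reply semantics above can be sketched as follows. This is a hypothetical illustration, not the real client API: `create_transfers`, the event fields, and the result codes are stand-ins. The point is that one request carries many events, and the single reply reports a result per event, with failures independent of each other unless the events are linked.

```python
# Hypothetical sketch: one request, one reply, per-event results.
# `create_transfers` is a stand-in for a client call, not TigerBeetle's API.

def create_transfers(transfers):
    """Stand-in: commit the whole batch as one request and return the
    reply as (index, result) pairs for events that did not succeed."""
    results = []
    for i, t in enumerate(transfers):
        if t["amount"] <= 0:
            results.append((i, "invalid_amount"))
    return results

batch = [
    {"id": 1, "amount": 100},
    {"id": 2, "amount": 0},    # fails independently of the other events
    {"id": 3, "amount": 250},
]

errors = create_transfers(batch)  # a single request for the whole batch
for index, code in errors:
    print(f"event {index} failed: {code}")
```

Events 1 and 3 still succeed even though event 2 fails, because the events in this sketch are not linked.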
To achieve high throughput, TigerBeetle amortizes the overhead of consensus and I/O by batching many operation events in each request. For the best performance, each request should batch as many events as possible. Typically this means funneling events through fewer client instances.
The maximum number of events per batch depends on the maximum message size (`config.message_size_max`) and the operation type.
(TODO: Expose batch size in the client instead).
In the default configuration, the batch sizes are:
Presently, the client application is responsible for batching events. This is a stopgap: automatic batching has not yet been implemented within the clients themselves.
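Until the clients batch automatically, the application can do it with a small accumulator: collect events as they arrive and flush them as one request once the batch is full. The sketch below is an assumption-laden illustration: `send_request` and the `max_batch` value are stand-ins, not the real client API or the real batch limit.

```python
# Hypothetical application-side batcher (the current stopgap).
# `send_request` and `max_batch` are stand-ins, not TigerBeetle's API.

class Batcher:
    def __init__(self, send_request, max_batch):
        self.send_request = send_request
        self.max_batch = max_batch
        self.pending = []

    def add(self, event):
        """Buffer an event; flush when the batch reaches max_batch."""
        self.pending.append(event)
        if len(self.pending) >= self.max_batch:
            self.flush()

    def flush(self):
        """Send all buffered events as a single request."""
        if self.pending:
            self.send_request(self.pending)
            self.pending = []

sent = []  # records each request's batch, standing in for the network
batcher = Batcher(send_request=sent.append, max_batch=3)
for event in range(7):
    batcher.add(event)
batcher.flush()  # flush the final partial batch
# sent == [[0, 1, 2], [3, 4, 5], [6]]
```

A production batcher would also flush on a timer so that a trickle of events does not wait indefinitely for a full batch.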
Read more about how two-phase transfers work with each client.
## API Layer Architecture
In some application architectures, the number of services that need to query TigerBeetle may:
Rather than each service connecting to TigerBeetle directly, application services can forward their requests to a pool of intermediate services (the "API layer") which can coalesce events from many application services into requests, and forward back the respective replies. This approach enables larger batch sizes and higher throughput, but comes at a cost: the application services' sessions are no longer linearizable, because the API services may restart at any time relative to the application service.
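The coalescing step in an API-layer service can be sketched as follows. This is a simplified illustration under stated assumptions: `coalesce_and_forward`, `send_request`, and the reply format (one result per event, in order) are hypothetical, not TigerBeetle's wire format. The key idea is recording which slice of the combined batch belongs to which application service, so the single reply can be fanned back out.

```python
# Hypothetical sketch of an API-layer service coalescing events from
# several application services into one request, then routing the
# respective slices of the reply back. Names are stand-ins.

def coalesce_and_forward(service_requests, send_request):
    """service_requests: {service_id: [events]}.
    Returns {service_id: [results]} by slicing the single reply."""
    batch, spans = [], {}
    for service_id, events in service_requests.items():
        spans[service_id] = (len(batch), len(batch) + len(events))
        batch.extend(events)
    reply = send_request(batch)  # one request, amortized over all services
    return {sid: reply[start:end] for sid, (start, end) in spans.items()}

# Stand-in backend: echo each event back as its "result".
reply = coalesce_and_forward(
    {"svc-a": ["e1", "e2"], "svc-b": ["e3"]},
    send_request=lambda batch: [f"ok:{e}" for e in batch],
)
# reply == {"svc-a": ["ok:e1", "ok:e2"], "svc-b": ["ok:e3"]}
```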
## Queues and Workers
If you are making requests to TigerBeetle from workers pulling jobs from a queue, you can batch requests by having each worker pull and act on multiple jobs from the queue at once, rather than one at a time.
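A minimal sketch of this pattern, assuming a generic in-process queue: `drain`, `max_jobs`, and the `requests` list (standing in for sending a request per drained batch) are illustrative names, not a specific queue or client library.

```python
# Hypothetical sketch: a worker drains up to `max_jobs` per iteration,
# then issues one request for all of them instead of one per job.
from collections import deque

def drain(queue, max_jobs):
    """Pull up to max_jobs jobs from the queue instead of just one."""
    jobs = []
    while queue and len(jobs) < max_jobs:
        jobs.append(queue.popleft())
    return jobs

queue = deque(["job1", "job2", "job3", "job4", "job5"])
requests = []  # records each batch, standing in for send_request(jobs)
while queue:
    jobs = drain(queue, max_jobs=2)
    requests.append(jobs)
# requests == [["job1", "job2"], ["job3", "job4"], ["job5"]]
```

Five jobs become three requests here; with a realistic `max_jobs`, thousands of jobs collapse into a handful of requests.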