The Task API includes a suite of resilience operators that you compose directly onto any Task. Because Tasks are lazy, these operators describe how the task should behave when executed — they don’t trigger execution themselves.
### .retry(count, options?)
Retries a failed task up to count times. Optionally applies a delay between attempts with exponential backoff.
| Parameter | Type | Description |
|---|---|---|
| `count` | `number` | Maximum number of retry attempts after the first failure |
| `options.delay` | `number` | Initial delay in milliseconds before the first retry (default: `0`) |
| `options.factor` | `number` | Multiplier applied to the delay on each successive retry (default: `1`) |
```typescript
import { Task } from 'ts-chas/task';

const task = Task.from(
  () => fetch('/api/data').then(r => r.json()),
  (e) => new Error(`Request failed: ${e}`)
);

// Retry up to 3 times with exponential backoff: 1s, 2s, 4s
const result = await task
  .retry(3, { delay: 1000, factor: 2 })
  .execute();
```
With `factor: 2` and `delay: 1000`, the delays before successive retries are 1000 ms, 2000 ms, and 4000 ms.
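The schedule is a geometric progression: the i-th retry waits `delay * factor^i` milliseconds. A quick, library-independent sketch of the arithmetic (the helper name is invented for illustration, not part of the Task API):

```typescript
// Compute the delay before each retry: delay, delay * factor, delay * factor^2, ...
function backoffDelays(count: number, delay: number, factor: number): number[] {
  return Array.from({ length: count }, (_, i) => delay * factor ** i);
}

backoffDelays(3, 1000, 2); // → [1000, 2000, 4000]
```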
### .timeout(ms, onTimeout)
Fails with the value returned by onTimeout if the task has not completed within ms milliseconds.
| Parameter | Type | Description |
|---|---|---|
| `ms` | `number` | Maximum allowed duration in milliseconds |
| `onTimeout` | `() => E` | Returns the error value used when the timeout fires |
```typescript
const result = await Task.from(() => slowDatabaseQuery())
  .timeout(5000, () => new Error('Query timed out after 5s'))
  .execute();
```
### .circuitBreaker({ threshold, resetTimeout })
Protects downstream services by opening the circuit after threshold consecutive failures. While the circuit is open, the Task fails immediately with the string 'CIRCUIT_OPEN' without attempting execution.
After resetTimeout milliseconds, the circuit enters a half-open state and allows one attempt through. If it succeeds, the circuit closes. If it fails, it reopens.
| Option | Type | Description |
|---|---|---|
| `threshold` | `number` | Number of consecutive failures before the circuit opens |
| `resetTimeout` | `number` | Milliseconds to wait before trying again after opening |
```typescript
const protectedTask = Task.from(
  () => fetch('/api/payments').then(r => r.json()),
  (e) => new Error(`Payment service error: ${e}`)
).circuitBreaker({ threshold: 5, resetTimeout: 30_000 });

// First 5 failures open the circuit
// For the next 30s, calls fail immediately with 'CIRCUIT_OPEN'
// After 30s, one call is let through; if it succeeds, the circuit resets
const result = await protectedTask.execute();
```
The circuit breaker state is held on the Task instance, so create one shared instance and reuse it across concurrent calls; building a fresh Task per call gives every call its own breaker, which then never accumulates enough failures to open.
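Under the hood, a breaker of this kind is a three-state machine: closed, open, half-open. The sketch below is a standalone illustration of those transitions, not the library's internals; the class and method names are invented, and the clock is injectable so the timing can be tested without real waits:

```typescript
type CircuitState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  state: CircuitState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private threshold: number,
    private resetTimeout: number,
    private now: () => number = Date.now,
  ) {}

  // Open circuits transition to half-open once resetTimeout has elapsed.
  canAttempt(): boolean {
    if (this.state === 'open' && this.now() - this.openedAt >= this.resetTimeout) {
      this.state = 'half-open';
    }
    return this.state !== 'open';
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = 'closed'; // a half-open success closes the circuit
  }

  recordFailure(): void {
    this.failures++;
    // A half-open failure reopens immediately; otherwise open at the threshold.
    if (this.state === 'half-open' || this.failures >= this.threshold) {
      this.state = 'open';
      this.openedAt = this.now();
      this.failures = 0;
    }
  }
}
```

With `threshold: 5` and `resetTimeout: 30_000` this mirrors the timeline in the comments above: five consecutive `recordFailure` calls open the circuit, `canAttempt()` returns false for 30 seconds, then one probe is allowed through.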
### .throttle(concurrency)
Limits the number of simultaneous executions of the Task. Excess calls queue and wait for a slot to open.
| Parameter | Type | Description |
|---|---|---|
| `concurrency` | `number` | Maximum number of concurrent executions allowed |
This is useful when calling a rate-limited API from code that launches tasks in parallel:
```typescript
const apiTask = Task.from(
  () => fetch('/api/rate-limited').then(r => r.json()),
  () => new Error('API error')
).throttle(3); // at most 3 in-flight at a time

// All 10 calls share the same throttle — only 3 run concurrently
const results = await Promise.all(
  Array.from({ length: 10 }, () => apiTask.execute())
);
```
Like .circuitBreaker(), the throttle state is held on the Task instance. Reuse the same instance for the limit to take effect across concurrent callers.
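Concurrency limits like this are conventionally built on a counting semaphore: each execution acquires a slot before running and releases it when done, and excess acquires wait in a queue. A minimal sketch (the `Semaphore` class is illustrative, not part of the library):

```typescript
// A counting semaphore: at most `limit` holders at once; excess acquires queue.
class Semaphore {
  inFlight = 0;
  private waiters: (() => void)[] = [];

  constructor(private limit: number) {}

  get queued(): number {
    return this.waiters.length;
  }

  acquire(): Promise<void> {
    if (this.inFlight < this.limit) {
      this.inFlight++;
      return Promise.resolve(); // a slot is free: proceed immediately
    }
    return new Promise((resolve) => this.waiters.push(resolve)); // wait in line
  }

  release(): void {
    const next = this.waiters.shift();
    if (next) next(); // hand the slot straight to the next waiter
    else this.inFlight--;
  }
}
```

A throttled run then wraps the work as acquire, try/finally, release, so a slot is freed even when the task fails.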
### .delay(ms)
Adds a fixed delay before the Task begins executing.
| Parameter | Type | Description |
|---|---|---|
| `ms` | `number` | Milliseconds to wait before running the task |
```typescript
const delayedTask = Task.from(() => sendNotification())
  .delay(500); // wait 500ms before executing

const result = await delayedTask.execute();
```
### .withSignal(signal)
Binds an AbortSignal to the Task. If the signal is already aborted when .execute() is called, or fires while the Task is running, the Task fails immediately with the abort reason.
| Parameter | Type | Description |
|---|---|---|
| `signal` | `AbortSignal` | The signal to listen to |
```typescript
const controller = new AbortController();

// Cancel all in-flight tasks when the user navigates away
window.addEventListener('beforeunload', () => controller.abort());

const result = await Task.from(() => fetchReport())
  .withSignal(controller.signal)
  .execute();
```
You can also pass a signal directly to .execute() as a shorthand:
```typescript
const result = await task.execute(controller.signal);
```
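The signal semantics described above (fail immediately if already aborted, or when the signal fires mid-flight) come down to racing the underlying promise against the abort event. A self-contained sketch of that pattern; the helper name `withAbort` is an invention for illustration, not part of the Task API:

```typescript
// Race a promise against an AbortSignal: reject with the abort reason if the
// signal is already aborted or fires before the promise settles.
function withAbort<T>(promise: Promise<T>, signal: AbortSignal): Promise<T> {
  if (signal.aborted) return Promise.reject(signal.reason);
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(signal.reason);
    signal.addEventListener('abort', onAbort, { once: true });
    promise.then(
      (value) => { signal.removeEventListener('abort', onAbort); resolve(value); },
      (err) => { signal.removeEventListener('abort', onAbort); reject(err); },
    );
  });
}
```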
## Combining resilience operators
Resilience operators compose naturally. A realistic pattern for a production API call:
```typescript
import { Task } from 'ts-chas/task';

const resilientFetch = Task.from(
  () => fetch('/api/data').then(r => r.json()),
  (e) => new Error(`Fetch failed: ${e}`)
)
  .retry(3, { delay: 500, factor: 2 })
  .timeout(10_000, () => new Error('Timed out'))
  .circuitBreaker({ threshold: 5, resetTimeout: 30_000 })
  .fallback(Task.from(() => getCachedData(), () => new Error('Cache empty')));

const result = await resilientFetch.execute();

if (result.isOk()) {
  console.log('Data:', result.value);
} else {
  console.error('All strategies failed:', result.error.message);
}
```
The operators apply in the order they appear:
- The fetch is retried up to 3 times with backoff on failure.
- The entire retry sequence must complete within 10 seconds.
- If 5 consecutive attempts fail, the circuit opens and subsequent calls return immediately.
- If the circuit is open (or all retries are exhausted), the fallback Task runs instead.
## Caching operators
### .once()
Executes the Task exactly once and caches the Result for the lifetime of the instance. Every subsequent call to .execute() returns the cached result without re-running the underlying logic.
Best suited for one-time initialization work:
```typescript
const loadConfig = Task.from(
  () => fetch('/config.json').then(r => r.json()),
  () => new Error('Config load failed')
).once();

// Only the first execute() triggers the fetch; the rest get the cached Result
await loadConfig.execute();
await loadConfig.execute(); // returns cached
await loadConfig.execute(); // returns cached
```
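The behavior of `.once()` is the classic promise-caching pattern: run the thunk on the first call and hand every later caller the same promise. A standalone sketch of the idea, not the library's source:

```typescript
// Run the thunk on the first call only; every later call gets the same promise.
// Caching the promise (not the value) means callers that arrive while the first
// run is still in flight share that single execution.
function once<T>(fn: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= fn());
}

let calls = 0;
const load = once(async () => ++calls);
void load();
void load();
// fn ran exactly once: calls === 1, and both invocations share one promise
```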
### .memoize({ ttl?, cacheErr? })
In-memory memoization with an optional time-to-live. While the cache is valid, .execute() returns the stored Result without re-running the Task.
| Option | Type | Description |
|---|---|---|
| `ttl` | `number` | Cache lifetime in milliseconds. Omit to cache forever. |
| `cacheErr` | `boolean` | Cache Err results as well. Default: `false`. |
```typescript
const userTask = Task.from(
  () => fetch('/api/user').then(r => r.json()),
  () => new Error('User fetch failed')
).memoize({ ttl: 60_000 });

await userTask.execute(); // fetches from network
await userTask.execute(); // returns in-memory cache (within 60s)
// after 60s, next call fetches again
```
Setting cacheErr: true caches error results too, which prevents hammering a failing service:
```typescript
.memoize({ ttl: 5_000, cacheErr: true })
```
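The core TTL bookkeeping is small. This sketch memoizes a synchronous computation with an injectable clock so expiry is easy to reason about; it is illustrative only and omits the `cacheErr` handling and Result wrapping the real operator performs:

```typescript
// Memoize a computation for `ttl` milliseconds; re-run after expiry.
function memoizeTtl<T>(fn: () => T, ttl: number, now: () => number = Date.now): () => T {
  let value: T;
  let expiresAt = -Infinity; // force a compute on the first call
  return () => {
    const t = now();
    if (t >= expiresAt) {
      value = fn();
      expiresAt = t + ttl;
    }
    return value;
  };
}
```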
### .cache(key, store, { ttl? })
Delegates caching to an external store via the TaskCache interface. Useful for sharing cached data across process restarts or multiple instances.
```typescript
interface TaskCache {
  get<T>(key: string): T | undefined | Promise<T | undefined>;
  set<T>(key: string, value: T, ttl?: number): void | Promise<void>;
}
```
A plain Map satisfies TaskCache:
```typescript
const store = new Map<string, any>();

const result = await Task.from(
  () => fetch('/api/products').then(r => r.json()),
  () => new Error('Products fetch failed')
)
  .cache('products:all', store, { ttl: 5 * 60_000 }) // 5-minute TTL
  .execute();

// Second call within 5 minutes reads from the Map
const cachedResult = await Task.from(
  () => fetch('/api/products').then(r => r.json()),
  () => new Error('Products fetch failed')
)
  .cache('products:all', store, { ttl: 5 * 60_000 })
  .execute();
```
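Note that a plain Map ignores the `ttl` argument passed to `set`. For stores that should enforce expiry themselves, a small TTL-aware object matching the TaskCache shape might look like this; the `TtlStore` name, the lazy eviction on read, and the choice to enforce `ttl` in the store (rather than in the `.cache` operator) are all illustrative assumptions:

```typescript
// A minimal TaskCache-compatible store that honors ttl itself.
class TtlStore {
  private entries = new Map<string, { value: unknown; expiresAt: number }>();

  constructor(private now: () => number = Date.now) {}

  get<T>(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.entries.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value as T;
  }

  set<T>(key: string, value: T, ttl = Infinity): void {
    this.entries.set(key, { value, expiresAt: this.now() + ttl });
  }
}
```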
For production use, implement TaskCache against Redis, IndexedDB, or any other store:
```typescript
const redisStore: TaskCache = {
  get: async (key) => {
    const raw = await redis.get(key);
    return raw ? JSON.parse(raw) : undefined;
  },
  set: async (key, value, ttl) => {
    const serialized = JSON.stringify(value);
    if (ttl) {
      await redis.set(key, serialized, 'PX', ttl);
    } else {
      await redis.set(key, serialized);
    }
  },
};

const result = await Task.from(() => fetchUser())
  .cache('user:42', redisStore, { ttl: 60_000 })
  .execute();
```