In Part 1, we explored the architecture and core components of Azure Durable Functions, understanding when to choose them over Logic Apps and the benefits they bring to enterprise integration scenarios. Now it’s time to get hands-on.
This post dives into implementation patterns with code examples. You’ll see exactly how to build resilient, scalable workflows that handle real-world enterprise challenges. Each pattern addresses specific orchestration needs and can be combined to build sophisticated integration platforms.
Enterprise Integration Patterns with Durable Functions
The patterns below represent proven solutions to common enterprise integration challenges. Let’s explore each one with practical implementations.
Pattern 1: Function Chaining
The Function Chaining pattern executes a series of functions in a specific order. The output of one function serves as the input for the next, creating a linear, sequential workflow.
Use Case: Order Fulfillment Workflow
An order fulfillment process is a classic example of function chaining. Each step, from payment to shipping, must be completed in order.
Workflow Steps:
- Process Payment: The first function processes the order payment
- Update Inventory: The second function, using the successful payment confirmation, updates the product inventory
- Ship Order: The final function triggers the shipping process
Implementation
Orchestrator Function
[FunctionName("OrderFulfillmentOrchestrator")]
public static async Task<string> RunOrderFulfillment([OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
// Step 1: Process Payment
await context.CallActivityAsync("ProcessPayment", order.PaymentDetails);
// Step 2: Update Inventory
await context.CallActivityAsync("UpdateInventory", order.Items);
// Step 3: Ship Order
var trackingNumber = await context.CallActivityAsync<string>("ShipOrder", order.ShippingDetails);
return $"Order fulfilled with tracking number: {trackingNumber}";
}
Activity Functions
[FunctionName("ProcessPayment")]
public static async Task ProcessPayment([ActivityTrigger] PaymentDetails paymentDetails, ILogger log)
{
var paymentService = new PaymentService();
await paymentService.ProcessAsync(paymentDetails);
log.LogInformation($"Payment processed for amount: {paymentDetails.Amount}");
}
[FunctionName("UpdateInventory")]
public static async Task UpdateInventory([ActivityTrigger] List<OrderItem> items, ILogger log)
{
var inventoryService = new InventoryService();
await inventoryService.DecrementStockAsync(items);
log.LogInformation($"Inventory updated for {items.Count} items");
}
[FunctionName("ShipOrder")]
public static async Task<string> ShipOrder([ActivityTrigger] ShippingDetails shippingDetails, ILogger log)
{
var shippingService = new ShippingService();
var trackingNumber = await shippingService.CreateShipmentAsync(shippingDetails);
log.LogInformation($"Order shipped with tracking: {trackingNumber}");
return trackingNumber;
}
Client Function
[FunctionName("StartOrderProcessing")]
public static async Task<IActionResult> HttpStart(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
[DurableClient] IDurableOrchestrationClient starter)
{
// Read and deserialize the order from the request body (uses Newtonsoft.Json)
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
var order = JsonConvert.DeserializeObject<Order>(requestBody);
string instanceId = await starter.StartNewAsync("OrderFulfillmentOrchestrator", order);
return starter.CreateCheckStatusResponse(req, instanceId);
}
Pattern 2: Fan-Out/Fan-In (Scatter-Gather)
The Fan-Out/Fan-In pattern, also known as Scatter-Gather, runs multiple functions in parallel and then waits for all of them to complete. This is used to distribute work and aggregate results, significantly speeding up processes that can be broken down.
This pattern is essential for performance optimisation in enterprise scenarios where multiple independent operations can run simultaneously. It can reduce total processing time from minutes to seconds in data-intensive workflows.
Use Case: Parallel Order Validation
After a customer places a large order, it must be validated against multiple systems simultaneously before being approved.
Workflow Steps:
- Fan-Out (Scatter): The orchestrator triggers separate activity functions to check the customer’s credit, verify inventory across warehouses, and perform fraud detection, all in parallel
- Fan-In (Gather): The orchestrator waits for all validation tasks to complete
- Process Results: Once all checks return, it aggregates the results to decide whether to approve or reject the order
Implementation
Orchestrator Function
[FunctionName("ParallelOrderValidationOrchestrator")]
public static async Task<string> RunParallelValidation([OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
// Scatter: Start multiple validation tasks in parallel
var validationTasks = new List<Task<bool>>
{
context.CallActivityAsync<bool>("ValidateCustomerCredit", order.CustomerId),
context.CallActivityAsync<bool>("CheckInventoryAcrossWarehouses", order.Items),
context.CallActivityAsync<bool>("RunFraudDetectionCheck", order.OrderDetails)
};
// Gather: Wait for all validation tasks to complete
var results = await Task.WhenAll(validationTasks);
// Process results: Check if all validations passed
if (results.All(r => r))
{
await context.CallActivityAsync("ApproveOrder", order);
return "Order approved";
}
else
{
await context.CallActivityAsync("RejectOrder", order);
return "Order rejected due to failed validation";
}
}
Activity Functions
[FunctionName("ValidateCustomerCredit")]
public static async Task<bool> ValidateCustomerCredit([ActivityTrigger] string customerId, ILogger log)
{
var creditService = new CreditCheckService();
var result = await creditService.ValidateAsync(customerId);
log.LogInformation($"Credit validation for {customerId}: {result}");
return result;
}
[FunctionName("CheckInventoryAcrossWarehouses")]
public static async Task<bool> CheckInventoryAcrossWarehouses([ActivityTrigger] List<OrderItem> items, ILogger log)
{
var inventoryService = new InventoryService();
var result = await inventoryService.CheckAvailabilityAcrossWarehousesAsync(items);
log.LogInformation($"Inventory check completed: {result}");
return result;
}
[FunctionName("RunFraudDetectionCheck")]
public static async Task<bool> RunFraudDetectionCheck([ActivityTrigger] OrderDetails orderDetails, ILogger log)
{
var fraudService = new FraudDetectionService();
var result = await fraudService.AnalyzeAsync(orderDetails);
log.LogInformation($"Fraud detection completed: {result}");
return result;
}
Pattern 3: Async HTTP APIs
The Async HTTP APIs pattern initiates a long-running process with an HTTP request and provides a status URL for the client to check progress, preventing HTTP timeouts. This is ideal for jobs that may take several minutes or more and allows clients to perform other work while waiting for results.
Use Case: Large Order Submission
A client submits a large order that requires several minutes of processing. Instead of the client waiting, the service immediately accepts the request and provides a status URL.
Workflow Steps:
- Client Request: The client sends an HTTP POST request to submit the large order
- API Response: The function starts an orchestration and returns an HTTP 202 (Accepted) response with a status URL
- Client Polling: The client uses the status URL to periodically check if the order processing is complete
- Completion: When the process is done, the status URL returns the final result, such as a confirmation number
Implementation
Client Function (HTTP Endpoint)
[FunctionName("SubmitLargeOrder")]
public static async Task<HttpResponseMessage> HttpStart(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
[DurableClient] IDurableOrchestrationClient starter)
{
// Deserialize the large order from the request body so the orchestrator
// receives a typed input rather than a raw JSON string
string requestBody = await req.Content.ReadAsStringAsync();
var order = JsonConvert.DeserializeObject<LargeOrder>(requestBody);
string instanceId = await starter.StartNewAsync("LargeOrderProcessingOrchestrator", null, order);
// Return the response with status check URL
return starter.CreateCheckStatusResponse(req, instanceId);
}
Orchestrator Function
[FunctionName("LargeOrderProcessingOrchestrator")]
public static async Task<string> ProcessLargeOrder([OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<LargeOrder>();
// Run the processing steps in sequence
await context.CallActivityAsync("ValidateLargeOrder", order);
await context.CallActivityAsync("ProcessPaymentBatch", order);
await context.CallActivityAsync("AllocateInventory", order);
var confirmationNumber = await context.CallActivityAsync<string>("GenerateConfirmation", order);
return confirmationNumber;
}
Response Example:
{
"id": "d1e7c72e8b9a4c5f9e3d4a2b1c0f8e7d",
"statusQueryGetUri": "https://yourapp.azurewebsites.net/runtime/webhooks/durabletask/instances/d1e7c72e8b9a4c5f9e3d4a2b1c0f8e7d?code=...",
"sendEventPostUri": "https://yourapp.azurewebsites.net/runtime/webhooks/durabletask/instances/d1e7c72e8b9a4c5f9e3d4a2b1c0f8e7d/raiseEvent/{eventName}?code=...",
"terminatePostUri": "https://yourapp.azurewebsites.net/runtime/webhooks/durabletask/instances/d1e7c72e8b9a4c5f9e3d4a2b1c0f8e7d/terminate?reason={text}&code=..."
}
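On the client side, polling boils down to issuing GET requests against statusQueryGetUri until the orchestration finishes. A minimal sketch, assuming a `statusQueryGetUri` string variable holding the URL from the response above (the five-second interval is illustrative):

```csharp
// Hypothetical polling loop; statusQueryGetUri comes from the 202 response above
using var http = new HttpClient();
string finalResult = null;
while (true)
{
    var statusResponse = await http.GetAsync(statusQueryGetUri);
    // 202 Accepted means the orchestration is still running; 200 OK carries the output
    if (statusResponse.StatusCode == HttpStatusCode.OK)
    {
        finalResult = await statusResponse.Content.ReadAsStringAsync();
        break;
    }
    await Task.Delay(TimeSpan.FromSeconds(5));
}
```

A production client would add a maximum wait time and honour any Retry-After header the status endpoint returns, rather than polling on a fixed cadence.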
Pattern 4: Human Interaction (Approval Workflows)
The Human Interaction pattern allows a workflow to pause and wait for external events, such as a human approval or a system callback. This is essential for approval workflows and other processes that require a manual step.
Use Case: Manual Order Approval
Orders over a certain amount require manual approval from a manager. The workflow pauses and waits for the manager’s decision before proceeding.
Workflow Steps:
- Check Order Value: If the order value is low, it’s auto-approved
- Request Approval: If the value is high, an email is sent to a manager, and the workflow waits for a response
- Handle Approval: The workflow resumes when the manager approves or rejects the order. A timeout is also set to handle cases where no response is received
Implementation
Orchestrator Function
[FunctionName("ManualOrderApprovalOrchestrator")]
public static async Task<string> ProcessOrderApproval(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
// Auto-approve small orders
if (order.Amount < 500)
{
await context.CallActivityAsync("ProcessOrder", order);
return "Auto-approved";
}
// Send for manager approval
await context.CallActivityAsync("SendApprovalRequest", order);
// Wait for approval or timeout (72 hours)
using var timeoutCts = new CancellationTokenSource();
var approvalTask = context.WaitForExternalEvent<bool>("OrderApprovalResponse");
var timeoutTask = context.CreateTimer(context.CurrentUtcDateTime.AddHours(72), timeoutCts.Token);
var completedTask = await Task.WhenAny(approvalTask, timeoutTask);
if (completedTask == approvalTask)
{
// Cancel the durable timer so the instance doesn't stay active until the deadline
timeoutCts.Cancel();
}
if (completedTask == approvalTask && approvalTask.Result)
{
await context.CallActivityAsync("ProcessOrder", order);
return "Approved";
}
else
{
await context.CallActivityAsync("RejectOrder", order);
return "Rejected or Timed Out";
}
}
Activity Functions
[FunctionName("SendApprovalRequest")]
public static async Task SendApprovalRequest(
[ActivityTrigger] Order order,
ILogger log)
{
var emailService = new EmailService();
await emailService.SendApprovalEmailAsync(order.ManagerEmail, order);
log.LogInformation($"Approval request sent for order {order.OrderId}");
}
[FunctionName("ProcessOrder")]
public static async Task ProcessOrder(
[ActivityTrigger] Order order,
ILogger log)
{
var orderService = new OrderService();
await orderService.ProcessAsync(order);
log.LogInformation($"Order {order.OrderId} processed successfully");
}
Approval Endpoint
[FunctionName("SubmitApproval")]
public static async Task<IActionResult> SubmitApproval(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
[DurableClient] IDurableOrchestrationClient client)
{
string instanceId = req.Query["instanceId"];
bool approved = bool.Parse(req.Query["approved"]);
await client.RaiseEventAsync(instanceId, "OrderApprovalResponse", approved);
return new OkObjectResult($"Approval {(approved ? "granted" : "denied")} for instance {instanceId}");
}
Pattern 5: Saga Pattern for Distributed Transactions
The Saga pattern maintains data consistency across multiple, independent services without a distributed transaction. It orchestrates a series of local transactions, and if a step fails, it executes compensating actions in reverse order to undo the completed transactions.
This pattern is critical for maintaining data consistency in microservices architectures where distributed transactions aren’t feasible. It ensures that either all operations complete successfully or all completed operations are properly compensated.
Use Case: Multi-Service Order Fulfillment
A complex order requires local transactions across different services: reserving inventory, processing payment, and updating the customer’s loyalty points. If one fails, the entire order must be rolled back.
Workflow Steps:
- Reserve Inventory: The first step reserves the items in the warehouse system. If it fails, the process is aborted. If it succeeds, a compensating action (ReleaseInventory) is noted
- Process Payment: The next step processes the payment. If it fails, the ReleaseInventory action is triggered to undo the previous step
- Update Loyalty Points: The final step updates the customer’s loyalty points. If this fails, both the payment and inventory reservation steps are compensated for
Implementation
Orchestrator Function
[FunctionName("MultiServiceOrderOrchestrator")]
public static async Task<string> ProcessOrder(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
var compensationActions = new List<string>();
try
{
// Step 1: Reserve Inventory
await context.CallActivityAsync("ReserveInventory", order.Items);
compensationActions.Add("ReleaseInventory");
// Step 2: Process Payment
await context.CallActivityAsync("ProcessPayment", order.PaymentDetails);
compensationActions.Add("RefundPayment");
// Step 3: Update Loyalty Points
await context.CallActivityAsync("UpdateLoyaltyPoints", order.CustomerId);
compensationActions.Add("RollbackLoyaltyPoints");
// All steps succeeded
return "Order completed successfully";
}
catch (Exception)
{
// Compensate in reverse order
compensationActions.Reverse();
foreach (var action in compensationActions)
{
await context.CallActivityAsync(action, order);
}
return "Order failed and rolled back.";
}
}
Activity and Compensation Functions
[FunctionName("ReserveInventory")]
public static async Task ReserveInventory([ActivityTrigger] List<OrderItem> items, ILogger log)
{
var inventoryService = new InventoryService();
await inventoryService.ReserveAsync(items);
log.LogInformation("Inventory reserved");
}
[FunctionName("ReleaseInventory")]
public static async Task ReleaseInventory([ActivityTrigger] Order order, ILogger log)
{
var inventoryService = new InventoryService();
await inventoryService.ReleaseAsync(order.Items);
log.LogInformation("Inventory released (compensation)");
}
[FunctionName("ProcessPayment")]
public static async Task ProcessPayment([ActivityTrigger] PaymentDetails paymentDetails, ILogger log)
{
var paymentService = new PaymentService();
await paymentService.ChargeAsync(paymentDetails);
log.LogInformation("Payment processed");
}
[FunctionName("RefundPayment")]
public static async Task RefundPayment([ActivityTrigger] Order order, ILogger log)
{
var paymentService = new PaymentService();
await paymentService.RefundAsync(order.PaymentDetails);
log.LogInformation("Payment refunded (compensation)");
}
[FunctionName("UpdateLoyaltyPoints")]
public static async Task UpdateLoyaltyPoints([ActivityTrigger] string customerId, ILogger log)
{
var loyaltyService = new LoyaltyService();
await loyaltyService.AddPointsAsync(customerId, 100);
log.LogInformation("Loyalty points updated");
}
[FunctionName("RollbackLoyaltyPoints")]
public static async Task RollbackLoyaltyPoints([ActivityTrigger] Order order, ILogger log)
{
var loyaltyService = new LoyaltyService();
await loyaltyService.RemovePointsAsync(order.CustomerId, 100);
log.LogInformation("Loyalty points rolled back (compensation)");
}
Pattern 6: Event-Driven Integration
The Event-Driven Integration pattern allows a durable orchestration to respond to external events, making it a perfect fit for event-driven architectures. The orchestration can pause until a specific event is received, then continue its workflow.
Event-driven patterns enable loosely coupled architectures where systems can respond to business events without tight dependencies. This pattern is particularly valuable in modern microservices architectures.
Use Case: Customer Onboarding
An order placed by a new customer triggers an onboarding process that must wait for a credit check result from an external system.
Workflow Steps:
- Start Initial Tasks: The workflow begins by creating a customer account and sending a welcome email
- Wait for Event: It then pauses and waits for an external event called CreditCheckResult
- Process Result: Once the event is received, it checks the result. If the credit check passed, it activates the customer’s premium features. If it failed, it notifies the customer of the rejection
Implementation
Orchestrator Function
[FunctionName("CustomerOnboardingOrchestrator")]
public static async Task<string> ProcessCustomerOnboarding(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var customer = context.GetInput<Customer>();
// Start initial tasks
await context.CallActivityAsync("CreateCustomerAccount", customer);
await context.CallActivityAsync("SendWelcomeEmail", customer);
// Wait for external event (CreditCheckResult)
var creditCheckResult = await context.WaitForExternalEvent<bool>("CreditCheckResult");
if (creditCheckResult)
{
await context.CallActivityAsync("ActivatePremiumFeatures", customer);
await context.CallActivityAsync("SendActivationEmail", customer);
return "Customer onboarded successfully";
}
else
{
await context.CallActivityAsync("RejectCustomer", customer);
return "Customer onboarding rejected due to failed credit check";
}
}
Activity Functions
[FunctionName("CreateCustomerAccount")]
public static async Task CreateCustomerAccount(
[ActivityTrigger] Customer customer,
ILogger log)
{
var customerService = new CustomerService();
await customerService.CreateAccountAsync(customer);
log.LogInformation($"Customer account created: {customer.CustomerId}");
}
[FunctionName("SendWelcomeEmail")]
public static async Task SendWelcomeEmail(
[ActivityTrigger] Customer customer,
ILogger log)
{
var emailService = new EmailService();
await emailService.SendWelcomeAsync(customer.Email);
log.LogInformation($"Welcome email sent to: {customer.Email}");
}
[FunctionName("ActivatePremiumFeatures")]
public static async Task ActivatePremiumFeatures(
[ActivityTrigger] Customer customer,
ILogger log)
{
var customerService = new CustomerService();
await customerService.ActivatePremiumAsync(customer.CustomerId);
log.LogInformation($"Premium features activated for: {customer.CustomerId}");
}
External Event Trigger (Credit Check Callback)
[FunctionName("CreditCheckCallback")]
public static async Task<IActionResult> ReceiveCreditCheckResult(
[HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
[DurableClient] IDurableOrchestrationClient client)
{
string instanceId = req.Query["instanceId"];
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
var creditCheckResult = JsonConvert.DeserializeObject<CreditCheckResult>(requestBody);
await client.RaiseEventAsync(instanceId, "CreditCheckResult", creditCheckResult.Passed);
return new OkObjectResult($"Credit check result received for instance {instanceId}");
}
Advanced Error Handling and Retry Strategies
One of the most powerful features of Durable Functions is the ability to implement sophisticated error handling and retry logic. Let’s explore this in detail.
Basic Retry Configuration
Durable Functions provide built-in retry policies through the RetryOptions class. You can define retry intervals, attempt limits, and exponential backoff behaviour. This helps handle transient failures, such as temporary network issues or API throttling, without writing manual retry loops.
[FunctionName("ResilientOrderOrchestrator")]
public static async Task<string> ProcessOrder(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
// Configure retry policy for inventory validation
var retryOptions = new RetryOptions(
firstRetryInterval: TimeSpan.FromSeconds(5),
maxNumberOfAttempts: 3);
var inventoryResult = await context.CallActivityWithRetryAsync<bool>("ValidateInventory", retryOptions, order.ProductId);
if (!inventoryResult)
return "Order failed: Insufficient inventory";
// Process payment with different retry strategy
var paymentRetryOptions = new RetryOptions(
firstRetryInterval: TimeSpan.FromSeconds(5),
maxNumberOfAttempts: 5)
{
BackoffCoefficient = 2.0,
MaxRetryInterval = TimeSpan.FromMinutes(5)
};
var paymentResult = await context.CallActivityWithRetryAsync<PaymentResult>("ProcessPayment", paymentRetryOptions, order);
if (!paymentResult.Success)
return "Order failed: Payment declined";
await context.CallActivityAsync("ShipOrder", order);
return "Order processed successfully";
}
The example above shows:
- Each activity can have its own retry policy.
- The RetryOptions class allows fine-grained control (intervals, backoff coefficient, max attempts).
- Retries are handled automatically by the Durable Task Framework with no custom loops needed.
Custom Exception Handling in Activities
You can combine orchestrator-level retries with detailed exception handling inside activity functions. By catching specific exceptions, you decide which errors should trigger retries and which should not, giving you full control over your resiliency strategy.
[FunctionName("ProcessPayment")]
public static async Task<PaymentResult> ProcessPayment([ActivityTrigger] PaymentRequest request, ILogger log)
{
try
{
var paymentGateway = new PaymentGatewayService();
return await paymentGateway.ProcessAsync(request);
}
catch (RateLimitException ex)
{
log.LogWarning($"Rate limit hit: {ex.Message}");
// Rethrow so the orchestrator-level retry policy applies its backoff; avoid
// Task.Delay here, which would keep the activity running (and billed) while idle
throw;
}
catch (PaymentDeclinedException ex)
{
log.LogInformation($"Payment declined: {ex.Message}");
// Don't retry declined payments
return new PaymentResult
{
Success = false,
Reason = ex.Message
};
}
catch (PaymentGatewayException ex)
{
log.LogError($"Payment gateway error: {ex.Message}");
throw; // Retry for transient gateway errors
}
}
The example above shows:
- Catch and classify exceptions to control retry behaviour.
- Throw exceptions for transient issues (to trigger retry); return handled results for permanent ones.
- Combine with CallActivityWithRetryAsync() for robust, layered fault tolerance.
Complex Business Logic Example
Durable Functions shine when implementing complex business workflows involving multiple conditional steps and external dependencies. Here’s a real-world example of pricing logic that would be difficult to express in Logic Apps: parallel data fetching, dynamic branching, and conditional discounts, all written in clean, testable code:
[FunctionName("ComplexPricingOrchestrator")]
public static async Task<PricingResult> CalculateOrderPrice(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
// Get customer details
var customer = await context.CallActivityAsync<Customer>("GetCustomer", order.CustomerId);
// Get current season and regional data in parallel
var seasonTask = context.CallActivityAsync<Season>("GetCurrentSeason", null);
var regionTask = context.CallActivityAsync<Region>("GetRegion", customer.RegionId);
await Task.WhenAll(seasonTask, regionTask);
var season = seasonTask.Result;
var region = regionTask.Result;
// Calculate base pricing
var pricing = await context.CallActivityAsync<PricingResult>(
"CalculatePricing",
new PricingInput { Order = order, Customer = customer, Season = season, Region = region });
// Apply VIP discount if eligible
if (customer.IsVIP && order.Amount > 10000)
{
pricing = await context.CallActivityAsync<PricingResult>(
"ApplyVIPDiscount",
new DiscountInput { Pricing = pricing, LoyaltyTier = customer.LoyaltyTier });
}
// Apply bulk discount
if (order.Items.Sum(i => i.Quantity) > 100)
{
pricing = await context.CallActivityAsync<PricingResult>(
"ApplyBulkDiscount",
pricing);
}
// Apply seasonal promotion if active
if (season.HasPromotion)
{
pricing = await context.CallActivityAsync<PricingResult>(
"ApplySeasonalPromotion",
new PromotionInput { Pricing = pricing, Season = season });
}
return pricing;
}
- Demonstrates complex orchestration logic combining parallelism (Task.WhenAll) and branching.
- Each discount or rule is modularised as an activity, improving maintainability and testability.
- This kind of dynamic, multi-criteria pricing is hard to express in Logic Apps, but natural in code.
Best Practices for Production
1. Keep Orchestrators Deterministic
Orchestrator functions must be deterministic: every replay of the same history must produce the same sequence of actions.
Avoid in Orchestrators:
// DON'T DO THIS
public static async Task BadOrchestrator(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
// Non-deterministic operations
var random = new Random().Next(); //Don't use Random
var now = DateTime.Now; // Don't use DateTime.Now
var guid = Guid.NewGuid(); // Don't generate GUIDs
// I/O operations
await File.ReadAllTextAsync("file.txt"); // No direct I/O
await httpClient.GetAsync("https://api.example.com"); // No direct HTTP calls
}
Do Instead:
// DO THIS
public static async Task GoodOrchestrator(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
// Use context methods for deterministic behaviour
var currentTime = context.CurrentUtcDateTime; // Replay-safe current time
var guid = context.NewGuid(); // Replay-safe GUID
// Put all I/O in activity functions
var fileContent = await context.CallActivityAsync<string>("ReadFile", "file.txt");
var apiResult = await context.CallActivityAsync<string>("CallExternalApi", "https://api.example.com");
}
2. Design Activity Functions for Idempotency
Activity functions may be executed multiple times due to retries. Design them to be idempotent – safe to execute multiple times with the same input.
[FunctionName("UpdateInventoryIdempotent")]
public static async Task UpdateInventory([ActivityTrigger] InventoryUpdate update, ILogger log)
{
var inventoryService = new InventoryService();
// Check if this update was already applied
var existingUpdate = await inventoryService.GetUpdateByIdAsync(update.UpdateId);
if (existingUpdate != null)
{
log.LogInformation($"Update {update.UpdateId} already applied, skipping");
return;
}
// Apply the update and record it
await inventoryService.ApplyUpdateAsync(update);
log.LogInformation($"Inventory updated: {update.UpdateId}");
}
3. Use Unique Task Hubs for each Application
Separate different applications or environments using unique Task Hub names to prevent interference.
{
"extensions": {
"durableTask": {
"hubName": "OrderProcessing"
}
}
}
4. Implement Comprehensive Monitoring
Configure Application Insights for complete observability:
[FunctionName("MonitoredOrchestrator")]
public static async Task<string> ProcessOrder([OrchestrationTrigger] IDurableOrchestrationContext context, ILogger log)
{
// Wrap the logger so entries are not duplicated during orchestrator replays
log = context.CreateReplaySafeLogger(log);
var order = context.GetInput<Order>();
// Log at key points
log.LogInformation($"Starting order processing for {order.OrderId}");
try
{
await context.CallActivityAsync("ProcessPayment", order);
log.LogInformation($"Payment processed for order {order.OrderId}");
await context.CallActivityAsync("ShipOrder", order);
log.LogInformation($"Order {order.OrderId} shipped successfully");
return "Success";
}
catch (Exception ex)
{
log.LogError(ex, $"Order {order.OrderId} failed: {ex.Message}");
throw;
}
}
5. Handle Long-running Workflows with Checkpoints
For very long-running workflows, use sub-orchestrations to break down the process:
[FunctionName("LongRunningOrderOrchestrator")]
public static async Task<string> ProcessLongRunningOrder(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
// Phase 1: Order Validation (sub-orchestration)
await context.CallSubOrchestratorAsync("OrderValidationOrchestrator", order);
// Phase 2: Payment Processing (sub-orchestration)
await context.CallSubOrchestratorAsync("PaymentProcessingOrchestrator", order);
// Phase 3: Fulfillment (sub-orchestration)
await context.CallSubOrchestratorAsync("FulfillmentOrchestrator", order);
return "Order completed through all phases";
}
6. Implement Proper Error Recovery
[FunctionName("RobustOrchestrator")]
public static async Task<string> ProcessWithRecovery(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var order = context.GetInput<Order>();
var maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
try
{
await context.CallActivityAsync("ProcessOrder", order);
return "Success";
}
catch (Exception ex) when (attempt < maxAttempts)
{
// Log and wait before retry
await context.CallActivityAsync("LogError",
new ErrorLog { OrderId = order.OrderId, Attempt = attempt, Error = ex.Message });
// Exponential backoff
var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt));
await context.CreateTimer(context.CurrentUtcDateTime.Add(delay), CancellationToken.None);
}
}
// All attempts failed
await context.CallActivityAsync("NotifyFailure", order);
return "Failed after all retry attempts";
}
7. Configure Appropriate Hosting Plans
Choose the right hosting plan based on your requirements:
Consumption Plan:
- Best for: Unpredictable workloads, development/testing
- Limits: 5-minute default function timeout (configurable up to 10 minutes); orchestrations can outlive this because they checkpoint between awaits, but each activity execution must fit within the timeout
- Cost: Pay only for execution time
Premium Plan:
- Best for: Predictable workloads, VNET integration needed
- Benefits: No cold starts, unlimited execution time, more powerful instances
- Cost: Fixed hourly rate
Dedicated (App Service Plan):
- Best for: Steady workloads, existing App Service environments
- Benefits: Predictable costs, full control over scaling
- Cost: Fixed monthly rate
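On the Consumption plan, the function timeout can be raised from its default via host.json; a minimal config sketch (the value shown is the Consumption-plan maximum):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```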
8. Monitoring and Observability
Azure provides comprehensive monitoring capabilities for Durable Functions through Azure Monitor and Application Insights, allowing you to gain end-to-end visibility into orchestration health, performance, and reliability.
Just like standard Azure Functions, all log entries, metrics, and exceptions from Durable Functions automatically flow into Application Insights when the SDK is configured. However, Durable Functions introduce additional telemetry dimensions because of their orchestration model. Every orchestration, activity, and event becomes its own tracked operation with a shared correlation ID.
How Durable Functions Appear in Application Insights:
- Orchestrations as Request Telemetry: Each orchestration instance (triggered by [OrchestrationTrigger]) is recorded as a “request” in Application Insights. The operation_Id uniquely identifies the orchestration instance, and the operation_ParentId links activity executions back to the orchestrator, allowing you to view a full dependency tree.
- Activities as Dependency Telemetry: Each call to CallActivityAsync or CallSubOrchestratorAsync appears as a dependency under the parent orchestration.
- Custom Metrics: Metrics logged via log.LogMetric(…) appear in the Custom Metrics blade or in KQL as customMetrics.
This structure makes it easy to trace a specific instance, such as an individual order, from orchestration start to completion.
Setting Up Application Insights
[assembly: FunctionsStartup(typeof(Startup))]
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
builder.Services.AddApplicationInsightsTelemetry();
// Add custom telemetry initializer
builder.Services.AddSingleton<ITelemetryInitializer, CustomTelemetryInitializer>();
}
}
Custom Metrics and Tracking
[FunctionName("OrderProcessingWithMetrics")]
public static async Task<string> ProcessOrder(
[OrchestrationTrigger] IDurableOrchestrationContext context,
ILogger log)
{
log = context.CreateReplaySafeLogger(log);
var order = context.GetInput<Order>();
// Use orchestration time instead of Stopwatch: wall-clock timing is not replay-safe
var startTime = context.CurrentUtcDateTime;
try
{
await context.CallActivityAsync("ProcessPayment", order);
await context.CallActivityAsync("ShipOrder", order);
var elapsedMs = (context.CurrentUtcDateTime - startTime).TotalMilliseconds;
// Track custom metric
log.LogMetric("OrderProcessingTime", elapsedMs,
new Dictionary<string, object>
{
{ "OrderId", order.OrderId },
{ "OrderAmount", order.Amount },
{ "Success", true }
});
return "Success";
}
catch (Exception ex)
{
var elapsedMs = (context.CurrentUtcDateTime - startTime).TotalMilliseconds;
log.LogMetric("OrderProcessingTime", elapsedMs,
new Dictionary<string, object>
{
{ "OrderId", order.OrderId },
{ "Success", false },
{ "ErrorType", ex.GetType().Name }
});
throw;
}
}
Tracing Individual Orchestration Instances
Every orchestration instance has a unique Instance ID, which is automatically included in the telemetry context. You can filter by this ID to trace a single workflow across all functions:
traces
| where customDimensions["DurableInstanceId"] == "abc123"
Alternatively, if you’re using Durable Functions Monitor or Azure Portal → Functions → Monitor, you can search by instance ID to view orchestration status, history, and failure details without writing KQL.
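One practical trick, sketched below, is to supply your own instance ID when starting the orchestration. Deriving it from the order number (a hypothetical convention — the trigger function name is also illustrative) lets operators search by a business key instead of a random GUID:

```csharp
[FunctionName("StartOrderProcessing")]
public static async Task<HttpResponseMessage> Start(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient client)
{
    var order = await req.Content.ReadAsAsync<Order>();

    // Use a deterministic, business-meaningful instance ID
    string instanceId = $"order-{order.OrderId}";
    await client.StartNewAsync("OrderProcessingWithMetrics", instanceId, order);

    // Returns 202 Accepted with status-query URLs containing the instance ID
    return client.CreateCheckStatusResponse(req, instanceId);
}
```

Note that starting a second orchestration with the same instance ID while one is still running will fail, which is often desirable: it gives you built-in deduplication per order.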
You can visualise orchestrations using:
- Application Insights Transaction Search: Displays full call chains (orchestrator → activities → dependencies → logs).
- Azure Monitor Workbooks: Use a workbook to combine metrics such as orchestration duration, failure rate, and custom metrics like OrderProcessingTime. Workbooks are ideal for creating a central observability dashboard.
- Durable Functions Monitor (open source): A dedicated viewer that provides instance history, inputs, outputs, and replays directly from storage or App Insights. Reference: https://github.com/microsoft/DurableFunctionsMonitor
- Monitoring Dashboard (preview): Observe, manage, and debug your task hub or scheduler’s orchestrations using the Durable Task Scheduler dashboard. Reference: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-task-scheduler/durable-task-scheduler-dashboard?pivots=az-cli
Example KQL Queries:
Average Orchestration Duration
requests
| where cloud_RoleName == "your-function-app"
| where name == "ProcessOrder"
| summarize avg(duration) by bin(timestamp, 1h)
Failed Orchestrations by exception type
exceptions
| where cloud_RoleName == "your-function-app"
| summarize count() by type
Custom Metric tracking (OrderProcessingTime)
customMetrics
| where name == "OrderProcessingTime"
| summarize avg(value) by tostring(customDimensions["Success"]), bin(timestamp, 1h)
Built-in vs Custom Monitoring:
- Out of the box:
  - Application Insights automatically collects request, dependency, trace, and exception telemetry.
  - The Azure Portal Durable Functions Monitoring tab provides basic instance-level insights.
- Custom Workbooks or Dashboards:
  - For production-grade systems (e.g., order tracking, SLAs, or tenant-level analytics), custom Azure Workbooks or Power BI dashboards are recommended.
  - They let you correlate business-level metrics (e.g., Order ID, Amount, Duration) with operational telemetry for proactive monitoring and alerting.
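As a starting point for such a workbook, here is a sketch of a KQL tile that computes an hourly failure rate (the role and function names are placeholders for your own):

```kusto
requests
| where cloud_RoleName == "your-function-app"
| where name == "OrderProcessingWithMetrics"
| summarize total = count(), failed = countif(success == false)
          by bin(timestamp, 1h)
| extend failureRatePct = round(100.0 * failed / total, 2)
```

The same query, with a threshold condition on failureRatePct, can back a log-based alert rule for proactive notification.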
9. Automatic History Purge for Cost Management
The orchestration history, while essential for debugging and replaying workflows, grows indefinitely and is the primary driver of storage costs and performance degradation in the dedicated Azure Storage account. Rather than relying on manual purging or custom timer functions, you should leverage the built-in Automatic History Purge feature.
This feature is configured in your host.json file and tells the Durable Functions extension to asynchronously and routinely clean up orchestration instance data that has reached a terminal state (e.g., Completed, Failed, or Terminated).
Implementation via host.json
Configure the maxHistoryEntryAge under the durableTask section to define the retention period. Set this value to the minimum duration necessary for auditing and debugging (e.g., 30 to 90 days).
The example below configures the extension to automatically purge all terminal orchestration history data older than 30 days:
{
"version": "2.0",
"extensions": {
"durableTask": {
"storageProvider": {
"maxHistoryEntryAge": "30.00:00:00"
}
}
}
}
- maxHistoryEntryAge: Specifies the retention time window in the D.HH:MM:SS format. A value of “30.00:00:00” ensures that the historical records are deleted 30 days after the orchestration completes.
Note: For more advanced retention policies, especially when using the new Durable Task Scheduler (which provides specific purge rules for Completed, Failed, and Canceled statuses – this is in preview currently), refer to the official documentation: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-task-scheduler/durable-task-scheduler-auto-purge?tabs=cli
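If you are on an extension version where the host.json setting is unavailable, or you need an immediate one-off cleanup, a similar result can be achieved through the client API. A sketch using a timer trigger (the schedule and 30-day retention window are illustrative choices):

```csharp
[FunctionName("PurgeOrchestrationHistory")]
public static async Task Purge(
    [TimerTrigger("0 0 2 * * *")] TimerInfo timer, // nightly at 02:00 UTC
    [DurableClient] IDurableOrchestrationClient client,
    ILogger log)
{
    // Purge only terminal instances created more than 30 days ago
    var result = await client.PurgeInstanceHistoryAsync(
        createdTimeFrom: DateTime.MinValue,
        createdTimeTo: DateTime.UtcNow.AddDays(-30),
        runtimeStatus: new[]
        {
            OrchestrationStatus.Completed,
            OrchestrationStatus.Failed,
            OrchestrationStatus.Terminated
        });

    log.LogInformation("Purged {Count} orchestration instances", result.InstancesDeleted);
}
```

Running and pending instances are untouched because only terminal statuses are passed in the filter.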
Key Takeaways
When to use each Pattern:
- Function Chaining: Sequential steps where order matters and each step depends on the previous.
- Fan-Out/Fan-In: Parallel processing of independent tasks that need aggregation.
- Async HTTP APIs: Long-running processes initiated via HTTP that would otherwise time out.
- Human Interaction: Workflows requiring manual approval or external intervention.
- Saga Pattern: Distributed transactions across microservices with compensation logic.
- Event-Driven: Loosely coupled systems responding to business events.
Essential Best Practices Summary
- Keep orchestrators deterministic – No random numbers, DateTime.Now, or direct I/O.
- Design for idempotency – Activity functions should handle multiple executions safely.
- Implement proper error handling – Use retry policies and compensation logic.
- Monitor comprehensively – Use Application Insights for full observability.
- Use unique Task Hubs – Isolate applications and environments.
- Choose the right hosting plan – Balance cost, performance, and requirements.
- Test thoroughly – Unit test orchestrators and activities separately.
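For the last point, orchestrators can be unit tested by invoking them directly with a mocked context. A sketch assuming xUnit and Moq, with the OrderFulfillmentOrchestrator from Pattern 1 hosted in a hypothetical Orchestrations class:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Moq;
using Xunit;

public class OrderFulfillmentOrchestratorTests
{
    [Fact]
    public async Task Orchestrator_RunsStepsInOrder_AndReturnsTrackingNumber()
    {
        var order = new Order(); // populate with test data as needed

        // Mock the context so no real activities or storage are involved
        var context = new Mock<IDurableOrchestrationContext>();
        context.Setup(c => c.GetInput<Order>()).Returns(order);
        context.Setup(c => c.CallActivityAsync(It.IsAny<string>(), It.IsAny<object>()))
               .Returns(Task.CompletedTask);
        context.Setup(c => c.CallActivityAsync<string>("ShipOrder", It.IsAny<object>()))
               .ReturnsAsync("TRACK-123");

        var result = await Orchestrations.RunOrderFulfillment(context.Object);

        Assert.Contains("TRACK-123", result);
        context.Verify(c => c.CallActivityAsync("ProcessPayment", It.IsAny<object>()), Times.Once);
        context.Verify(c => c.CallActivityAsync("UpdateInventory", It.IsAny<object>()), Times.Once);
    }
}
```

Because the orchestrator only interacts with the outside world through IDurableOrchestrationContext, mocking that single interface is enough to exercise the full control flow in milliseconds.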
Wrapping Up
Azure Durable Functions unlock powerful orchestration capabilities that go far beyond what traditional serverless functions can do. The patterns we’ve explored – from basic function chaining to sophisticated saga implementations – represent proven solutions to real-world integration challenges that enterprises face daily.
The key advantages of Durable Functions for enterprise integration are:
- Code-based flexibility for complex, evolving business logic
- Built-in reliability with automatic checkpointing and replay
- Sophisticated error handling with granular retry policies
- Performance at scale without per-action costs
- Full testability using standard development tools
Combined with proper testing, monitoring, and deployment strategies as covered in this post, Durable Functions form the foundation for robust enterprise integration platforms.
Logic Apps remain invaluable for connector-based integrations and empowering business users. Together with Durable Functions, they provide a balanced toolkit for enterprise integration – use the right tool for the right job, and you’ll build solutions that are both scalable and maintainable.
Did you miss Part 1? Check it out to learn about the architecture, core components, and when to choose Durable Functions vs Logic Apps.