
Performance Tuning in Mule 4 Applications

To achieve optimal performance from your Mule applications, you must evaluate both the applications themselves and the environment in which they run. Although Mule 4 is designed to tune itself, your applications might exhibit performance issues due to their initial construction or dependencies.

Similarly, for on-premises installations, you might need to tune the environment itself so that your Mule applications can take full advantage of it. Because many variables influence it, tuning the performance of your application requires some trial and error.

You can simplify performance tuning by using documented best practices and testing your applications in ideal test environments. The following recommendations come from the Development and Services Engineering teams and benchmarking efforts by MuleSoft Performance Engineering.


Optimizing the performance of your Mule apps requires the following actions:

  • Applying tuning recommendations at the application level. See Performance Tuning Recommendations for details.
  • Considering tuning prerequisites and best practices before testing. See Performance Test Validations for details.
  • Monitoring performance during testing to determine the resources required by the Mule runtime engine (Mule). See Performance Monitoring for details.

Application-Level Tuning


Repeatable vs Non-repeatable Streaming

Understand which streaming strategy results in the best flow performance for your use case:
  • A repeatable stream (read payload more than once)
  • A non-repeatable stream (read payload only once)
See Repeatable vs Non-repeatable Streaming for details.
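
As a sketch, the two strategies look like this in a flow's configuration (the operation, path, and `File_Config` reference here are illustrative, not from the original post):

```xml
<!-- Repeatable: buffers the stream so multiple components can read it.
     The buffer starts in memory and overflows to disk once inMemorySize
     is exceeded. -->
<file:read path="large-input.csv" config-ref="File_Config">
    <repeatable-file-store-stream inMemorySize="512" bufferUnit="KB"/>
</file:read>

<!-- Non-repeatable: the payload can be consumed only once, which avoids
     all buffering overhead when a single read is enough. -->
<file:read path="large-input.csv" config-ref="File_Config">
    <non-repeatable-stream/>
</file:read>
```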

Back-Pressure and MaxConcurrency

Understand back-pressure and learn how to use the maxConcurrency parameter to tune the number of concurrent messages sent to your flow.

See Back-Pressure and MaxConcurrency for details.
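
A minimal sketch of the parameter on a flow (flow name, listener config, and the value 100 are illustrative):

```xml
<!-- maxConcurrency caps the number of messages the flow processes at once;
     when the limit is reached, Mule applies back-pressure at the source. -->
<flow name="orders-flow" maxConcurrency="100">
    <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
    <!-- flow logic -->
</flow>
```

Tune the value against measured throughput: a limit that is too low leaves capacity idle, while one that is too high can exhaust threads or memory under load.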

Backend Server Response Time

Determine whether the average latency and throughput of your backend servers limit the scalability or performance of your application.

See Backend Server Response Time for details.

Caching

Understand when to use caching and which caching strategy to follow, based on key aspects of your data. Mule offers customizable mechanisms, such as the Cache Scope and the HTTP Caching API Gateway policy, to enable caching according to your needs.


See Caching for details.
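
A sketch of a Cache Scope backed by an object-store caching strategy (the strategy name, key expression, TTL, and connector references are hypothetical):

```xml
<!-- Cache entries are keyed by a query parameter and expire after 5 minutes. -->
<ee:object-store-caching-strategy name="Caching_Strategy"
    keyGenerationExpression="#[attributes.queryParams.customerId]">
    <os:private-object-store entryTtl="5" entryTtlUnit="MINUTES"/>
</ee:object-store-caching-strategy>

<flow name="customer-lookup-flow">
    <http:listener config-ref="HTTP_Listener_config" path="/customers"/>
    <!-- The backend request runs only on a cache miss; hits return the
         stored response without calling the backend. -->
    <ee:cache cachingStrategy-ref="Caching_Strategy">
        <http:request method="GET" config-ref="Backend_Config" path="/customers"/>
    </ee:cache>
</flow>
```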

Pooling Profiles

Pooling components helps process simultaneous incoming requests. Understand how to add a pooling profile to connectors when performance tests show that it is necessary.

See Pooling Profiles for details.
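
For example, a Database connector connection can carry a pooling profile (host, credentials, and pool sizes below are illustrative placeholders):

```xml
<db:config name="Database_Config">
    <db:my-sql-connection host="db.internal" port="3306"
        user="app" password="${db.password}" database="orders">
        <!-- Keep 5 connections warm and allow up to 20 under load;
             callers wait at most 10 seconds for a free connection. -->
        <db:pooling-profile minPoolSize="5" maxPoolSize="20"
            acquireIncrement="1" maxWait="10" maxWaitUnit="SECONDS"/>
    </db:my-sql-connection>
</db:config>
```

Size the pool from test results rather than guesswork: an oversized pool can overwhelm the backend database just as easily as an undersized one throttles the app.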

Domains

Using domains provides a central repository for all the shared resources, facilitating the class-loading process. Domains enhance performance when you deploy multiple services on the same on-premises instance of Mule runtime engine.

See Domains for details.
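
A minimal domain project sketch, sharing one HTTP listener configuration across applications (the config name and port are illustrative):

```xml
<!-- mule-domain-config.xml: resources declared here are loaded once and
     referenced by name from every app deployed to the domain. -->
<domain:mule-domain
    xmlns:domain="http://www.mulesoft.org/schema/mule/ee/domain"
    xmlns:http="http://www.mulesoft.org/schema/mule/http">
    <http:listener-config name="Shared_HTTP_Listener">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>
</domain:mule-domain>
```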

Logging

Understand why asynchronous logging performs better than synchronous logging.

See Logging for details.
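
Asynchronous logging is enabled in the application's `log4j2.xml`; a sketch of the relevant section (the appender name `file` is a placeholder for whatever appender the app defines):

```xml
<Loggers>
    <!-- AsyncRoot hands log events to a background thread, so the flow's
         processing thread does not block on disk I/O for each log line. -->
    <AsyncRoot level="INFO">
        <AppenderRef ref="file"/>
    </AsyncRoot>
</Loggers>
```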

Batch Processing

Mule can process messages in batches, but batch processing requires having enough available memory to process the threads in parallel, which means moving the records from persistent storage into RAM in a fixed-size block. Understand how to configure the batch block size property for your application.


See Batch Processing for details.
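
A sketch of the block size setting on a batch job (job and step names, and the value 200, are illustrative):

```xml
<!-- blockSize controls how many records move from persistent storage into
     memory per block; larger blocks trade memory for fewer I/O round trips. -->
<batch:job jobName="process-records-job" blockSize="200">
    <batch:process-records>
        <batch:step name="transform-step">
            <!-- per-record processing -->
        </batch:step>
    </batch:process-records>
</batch:job>
```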

Application Design Best Practices


Following certain practices at the design phase helps you to achieve better performance for your Mule apps.

See Application Design for details.

Performance Test Validation Prerequisites

Before executing performance testing:
  • Confirm that your Mule app and its functions work as expected, because a flawed flow can produce false-positive results.

  • Establish performance test criteria by asking yourself the following questions:

    • What are the expected average and peak workloads?

    • What specific need does your use case address?

      • Throughput, when handling a large volume of transactions is a high priority.

      • Response time or latency, if spikes in activity negatively affect user experience.

      • Concurrency, if it is necessary to support a large number of users connecting at the same time.

      • Managing large messages, when the application is transferring, caching, storing, or processing a payload bigger than 1 MB.

    • What is the minimum acceptable throughput?

    • What is the maximum acceptable response time?

