
Endjin's expertise in high performance data processing and compute has helped many of our customers solve complex performance issues that were preventing success.

Whether you're developing a new application, or struggling with performance bottlenecks in an existing system, our Performance Investigation process can help you understand where optimisations can be made according to your non-functional requirements.

The investigation takes the form of a structured scientific experiment, as described below.

1. Benchmarking

The first step in the process is to benchmark the current performance characteristics of the system. This means running a representative workload through the system and collecting the appropriate metrics from the different parts of the process, for example time taken, CPU usage, and memory usage.

This data is usually, but not necessarily, collected using Azure Application Insights.

Example: Running a parallelised operation in an Azure Durable Function.
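As a sketch of what this instrumentation might look like (the class, method, and metric names here are illustrative, not taken from a real engagement; it assumes the Application Insights SDK is referenced), an activity could time each unit of work and push the duration to Application Insights as a custom metric:

```csharp
using System.Diagnostics;
using Microsoft.ApplicationInsights;

public class WorkloadRunner
{
    private readonly TelemetryClient telemetry;

    public WorkloadRunner(TelemetryClient telemetry) => this.telemetry = telemetry;

    // Hypothetical activity: times one unit of the representative workload
    // and records the duration as a custom metric in Application Insights.
    public void ProcessPartition(string partitionId)
    {
        var stopwatch = Stopwatch.StartNew();

        DoWork(partitionId); // the real workload goes here

        stopwatch.Stop();
        this.telemetry.TrackMetric("PartitionProcessingMs", stopwatch.ElapsedMilliseconds);
    }

    private void DoWork(string partitionId) { /* ... */ }
}
```

Capturing a metric per unit of work, rather than a single end-to-end timing, is what makes it possible to see where in the process the time is going.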

2. Investigation

We then take the collected data, examine the results, and represent them graphically using the most appropriate visualisation techniques. The data is collated and presented in as many ways as possible in order to gain the most insight.

Example: Processing is taking longer than anticipated, with unexplained gaps seen in the processing in Application Insights.
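One way to look for such gaps (a sketch only; the operation name is an assumption, and the real query depends on how the system is instrumented) is a Kusto query over the Application Insights traces table, rendered as a time chart so that quiet periods appear as troughs:

```kusto
// Count trace events per minute; gaps in processing show up as
// troughs (or missing bins) in the resulting time chart.
traces
| where timestamp > ago(1d)
| where operation_Name == "ProcessPartition"   // assumed operation name
| summarize EventCount = count() by bin(timestamp, 1m)
| render timechart
```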

3. Hypothesis

From the results of the investigation we form a hypothesis for what could be causing any unexplained behaviour.

Example: The long gaps in the processing are caused by .NET thread pool thread starvation.

4. Experiment

We then run a timeboxed experiment to prove or disprove the hypothesis.

Example: Increase the thread count manually and re-run the process.
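A minimal sketch of such an experiment in .NET (the thread count of 200 is illustrative; the right value comes from the observed concurrency of the workload) raises the thread pool's minimum worker thread count so that thread injection is not throttled during bursts of work:

```csharp
using System;
using System.Threading;

class ThreadPoolExperiment
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int workerThreads, out int ioThreads);
        Console.WriteLine($"Before: worker={workerThreads}, io={ioThreads}");

        // Hypothetical experiment: if starvation is the cause, pre-warming
        // the pool should make the unexplained gaps disappear on the re-run.
        ThreadPool.SetMinThreads(200, ioThreads);

        ThreadPool.GetMinThreads(out workerThreads, out ioThreads);
        Console.WriteLine($"After: worker={workerThreads}, io={ioThreads}");
    }
}
```

The point of the experiment is the comparison: re-run the same representative workload with this one change, and compare the new Application Insights data against the original benchmark.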

5. Conclusion

We collect the results from the experiment and draw conclusions about the hypothesis based on those results.

Example: After looking at Application Insights, we find that the unexplained gaps in processing are still present after re-running the processing. We conclude that the hypothesis was incorrect.

6. Repeat

We repeat the investigation, hypothesis, experiment, and conclusion steps until a hypothesis which explains the observed behaviour is proved correct.

7. Write-up

Once the investigation is complete, we'll provide a full write-up of the findings, including an executive summary and detailed lab notes of the experiments and conclusions.

The write-up provides the evidence for any decisions made as a result of the investigation. These decisions are then documented via architecture decision records (ADRs).