Browse our archives by topic…
Big Data

Adopt A Product Mindset To Maximise Value From Microsoft Fabric
In this post I describe how adopting a product mindset will help you to extract maximum value from Microsoft Fabric.

Exploring Strategies Enabled By Microsoft Fabric
In this post I provide an overview of how to build situational awareness and use it to understand the strategic opportunities enabled by Microsoft Fabric.

Developing a Data Mesh Inspired Vision Using Microsoft Fabric
Microsoft Fabric, influenced by Data Mesh, offers a solid choice for organizations seeking a data-driven strategy. This article will help you assess how to approach a Data Mesh inspired vision using Microsoft Fabric.

How Does Microsoft Fabric Measure Up To Data Mesh?
Data Mesh is the latest approach for delivering data-driven value at scale. Microsoft Fabric has been heavily influenced by Data Mesh, but there are gaps that you will need to address in areas such as data product marketplace, developing standards, master data management, and federated computational governance.

Microsoft Fabric Is A Socio-Technical Endeavour
Creating a successful organisation-wide data and analytics platform isn't just about architecture, schemas and semantic models. It's also about culture, organisational design and people. This blog explores the socio-technical nature of data and analytics and how this should influence your approach to adoption of Microsoft Fabric.

Copilot - Are You Ready to Unleash the Power of AI in Self Service Analytics?
In the ever-evolving landscape of data and analytics, the advent of AI-powered capabilities has opened up exciting possibilities for self-service reporting. Tools like Copilot in Power BI and Microsoft Fabric offer users the ability to extract insights from data using natural language prompts. It's an enticing prospect: anyone can explore, visualize, and analyze data without being constrained by pre-canned reports or relying on data engineering teams. However, as we start to embrace these new capabilities, it's essential to strike a balance between the potential benefits and the pitfalls.

Microsoft Fabric: Announced
Microsoft Fabric extends the promise of Azure Synapse integration to all analytics workloads from the data engineer to the business knowledge worker. It brings together Power BI, Data Factory, and the Data Lake, on a new generation of the Synapse data infrastructure. Delivered as a unified SaaS offering, it aims to reduce cost and time to value, while enabling new "citizen data science" capabilities. Check out all the resources from the endjin team collated in this post.

What is OneLake?
Explore OneLake, Microsoft Fabric's core storage for data in Azure and other clouds. Often described as "OneDrive for data", it underpins every Fabric workload.

Intro to Microsoft Fabric
Microsoft Fabric unifies data & analytics, building on Azure Synapse Analytics for improved data-level interoperability. Explore its offerings & pros/cons.

Ask the right questions to get your data insights projects back on track
Learn about the thinking behind endjin's Power BI Maturity assessment, which applies Wardley Doctrine to help you ask the right questions.

SQLbits 2023 - The Best Bits
This is a summary of the sessions I attended at SQLbits 2023, Europe's largest expert-led data conference, held in Newport, Wales.

Data validation in Python: a look into Pandera and Great Expectations
Data validation is a vital step in any data-oriented workstream. This post investigates and compares two popular Python data validation packages: Pandera and Great Expectations.

Customizing Lake Databases in Azure Synapse Analytics
Explore Custom Objects in Lake Databases for user-friendly column names, calculated columns, and pre-defined queries in Azure Synapse Analytics.

How to create a semantic model using Synapse Analytics Database Templates
Explore Azure Synapse Analytics Database Templates and learn how to create a semantic model in this second post of the series.

What is a Lake Database in Azure Synapse Analytics?
Explore Lake Databases in Azure Synapse Analytics: analyze Dataverse data, share Spark tables, and design models with Database Templates.

Insight Discovery (part 6) – How to define business requirements for a successful cloud data & analytics project
Many data projects fail to deliver the impact they should for a simple reason – they focus on the data. This series of posts explains a different way of thinking that will set up your data & analytics projects for success. Using an iterative, action-oriented, insight discovery process, it demonstrates tools and techniques that will help you to identify, define and prioritize requirements in your own projects so that they deliver maximum value. It also explores the synergy with modern cloud analytics platforms like Azure Synapse, explaining how the process and the architecture actively support each other for fast, impactful delivery.

What are Synapse Analytics Database Templates and why should you use them?
In this blog series we explore the newly released Azure Synapse Analytics Database Templates. We put them into action to understand how they can be leveraged as part of a modern data pipeline.

Insight Discovery (part 5) – Deliver insights incrementally with data pipelines
Part 5 of the Insight Discovery series: how to deliver insights incrementally with data pipelines.

Insight Discovery (part 4) – Data projects should have a backlog
Part 4 of the Insight Discovery series: why data projects should be driven from a backlog.

Insight Discovery (part 3) – Defining Actionable Insights
Part 3 of the Insight Discovery series: how to define actionable insights.

Insight Discovery (part 2) – successful data projects start by forgetting about the data
Part 2 of the Insight Discovery series: why successful data projects start by forgetting about the data.

Insight Discovery (part 1) – why do data projects often fail?
Part 1 of the Insight Discovery series: why data projects so often fail to deliver the impact they should.

How to apply behaviour driven development to data and analytics projects
In this blog we demonstrate how the Gherkin specification can be adapted to enable BDD to be applied to data engineering use cases.

Sharing access to synchronized Shared Metadata Model objects in Azure Synapse Analytics
The "Shared Metadata Model" is a powerful feature within Synapse Analytics that synchronizes Spark database objects with SQL Serverless. This article describes how to give non-admin users access to these synchronized objects in a least-privileged manner.

What is the Shared Metadata Model in Azure Synapse Analytics, and why should I use it?
A lesser-known feature of Azure Synapse is the "Shared Metadata Model". Synapse has the capability to automatically synchronize tables created via Synapse Spark with objects you can query via the usual SQL Serverless endpoint, without any additional configuration. This article brings attention to this capability, highlighting the benefits and trade-offs versus rolling your own SQL Serverless VIEWs.

Excel, data loss, IEEE754, and precision
The world runs on Excel, and misuse has caused some infamous data loss incidents. This post explores what happens when identifiers fall foul of Excel's numeric precision rules.
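
To see the failure mode in miniature, here's a minimal C# sketch (illustrative, not code from the post): a 19-digit identifier is exact as a 64-bit integer, but silently rounded when stored as an IEEE 754 double, which is how Excel holds numeric cells.

```csharp
using System;

class PrecisionExample
{
    static void Main()
    {
        // A 19-digit identifier: exact as a 64-bit integer.
        long identifier = 1234567890123456789;

        // What Excel effectively stores: an IEEE 754 double, whose 53-bit
        // mantissa cannot represent every integer of this magnitude.
        double asDouble = identifier;

        Console.WriteLine(identifier);               // 1234567890123456789
        Console.WriteLine(asDouble.ToString("F0"));  // rounded: the low-order digits are lost
    }
}
```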

SQLbits 2022 - The Best Bits
This is a summary of the sessions I attended at SQLbits 2022, Europe's largest expert-led data conference, held in London.

A visual approach to demand management and prioritisation
Spending more time planning than doing? Struggling to get stakeholders engaged in making tough decisions about prioritisation? This simple, light-touch approach to visual prioritisation could help.

Testing Power BI Reports with the ExecuteQueries REST API
Explore DAX queries for scenario-based testing in Power BI reports to ensure data model validity, rule adherence, and security maintenance.

Why you should care about the new Power BI ExecuteQueries API
The new Power BI ExecuteQueries REST API presents a number of new opportunities for Power BI developers in terms of tooling, process and integrations. This post highlights some of the key advantages of this new capability.
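
For a flavour of the API, here's a hedged C# sketch (not taken from the post; the dataset ID, token and DAX query are placeholders) that runs a query through the executeQueries endpoint:

```csharp
// Minimal sketch: execute a DAX query against a Power BI dataset via the
// ExecuteQueries REST API. Assumes you already have an AAD access token
// with the appropriate dataset scope.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ExecuteQueriesExample
{
    static async Task Main()
    {
        var datasetId = "<your-dataset-id>";    // placeholder
        var accessToken = "<aad-access-token>"; // placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var body = "{\"queries\":[{\"query\":\"EVALUATE VALUES('Date'[Year])\"}]}";
        var response = await client.PostAsync(
            $"https://api.powerbi.com/v1.0/myorg/datasets/{datasetId}/executeQueries",
            new StringContent(body, Encoding.UTF8, "application/json"));

        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```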

Managing schemas in Azure Synapse SQL Serverless
Explore Azure Synapse's SQL Serverless for on-demand data lake queries, its benefits, and challenges in managing schemas and maintaining data sync.

Data is the new soil
Thinking of data as the new soil is useful in highlighting the key elements that enable a successful data and analytics initiative.

How to test Azure Synapse notebooks
Explore data with Azure Synapse's interactive Spark notebooks, integrated with Pipelines & monitoring tools. Learn how to add tests for business rule validation.

Do robots dream of counting sheep?
Some of my thoughts, inspired whilst helping out on the farm over the weekend. What is the future of work, given the increasing presence of machines in our day-to-day lives? In which situations can AI deliver the greatest value? How can we ease the stress of digital transformation on the people impacted by it?

How to safely reference a nullable activity output in Azure Synapse Pipelines and Azure Data Factory
Discover Azure Data Factory's null-safe operator for referencing activity outputs that may not always exist. Learn to use it effectively.

Learning from Covid-19
Summary of key themes from the Doing Data Together conference hosted virtually by The Scotsman newspaper and Edinburgh University in November 2020. The conference agenda was pivoted to focus on the use of data to help tackle the Covid-19 pandemic. It provided a fascinating insight into the lessons learned.

How Azure Synapse unifies your development experience
Modern analytics requires a multi-faceted approach, which can cause integration headaches. Azure Synapse's Swiss army knife approach can remove a lot of friction.

How do I know if my data solutions are accurate?
Data insights are useless, and even dangerous (as we've seen recently at Public Health England) if they can't be trusted. So, the need to validate business rules and security boundaries within a data solution is critical. This post argues that if you're doing anything serious with data, then you should be taking this seriously.

How to fix the "You need permission to access workspace..." error in Azure Synapse Analytics
Data Engineers/Developers want to get access to Azure Synapse Analytics as quickly as possible to start designing and creating their data solutions. Being denied access to Synapse Studio can be frustrating and slows matters down. This article will address the "You need permission to access workspace..." error, discuss what causes it, and describe how to fix it.

How to use the Azure CLI to manage access to Synapse Studio
Learn how Owners and Contributors of an Azure Synapse Analytics resource can use the Azure CLI to assign Synapse Studio roles to developers.

The Public Health England Test and Trace Excel error could have been prevented by this one simple step
Despite the subsequent media reporting, the loss of 16,000 Covid-19 test results at Public Health England wasn't caused by Excel. This post argues that a lack of an appropriate risk and mitigation analysis left the process exposed to human error, which ultimately led to the loss of data and inaccurate reporting. It describes a simple process that could have been applied to prevent the error, and how it will help if you're worried about ensuring quality or reducing risk in your own business, technology or data programmes.

Does Azure Synapse Link redefine the meaning of full stack serverless?
Azure Synapse Link for Cosmos DB is a game-changing piece in the Synapse suite of services - extending the support for SQL on Demand to enable querying over the Cosmos DB Analytical Store. This post explores whether the term 'full stack serverless' should now be extended to cover No-ETL and pay-as-you-query analytics, alongside serverless application architectures.

How to use SQL Notebooks to access Azure Synapse SQL Pools & SQL on demand
Wishing Azure Synapse Analytics had support for SQL notebooks? Fear not: it's easy to take advantage of rich interactive notebooks for SQL Pools and SQL on Demand.

ArrayPool vs MemoryPool—minimizing allocations in AIS.NET
Tracking down unexpected allocations in a high-performance .NET parsing library.

Deploy an Azure Synapse Analytics workspace using an ARM Template
Azure Synapse Analytics is Microsoft's new unified cloud analytics platform, which will surely be playing a big part in many organizations' technology stacks in the near future. For many organizations, Azure Resource Manager (ARM) templates are the infrastructure deployment method of choice. This blog explains how to deploy an Azure Synapse Analytics workspace using an ARM template.

Azure Synapse Analytics: How serverless is replacing the data warehouse
Serverless data architectures enable leaner data insights and operations. How do you reap the rewards while avoiding the potential pitfalls?

Talking about Azure Synapse on Microsoft Mechanics!
I was recently invited on to Microsoft Mechanics to talk about the new on-demand SQL Serverless offering within Azure Synapse. If you have been following along with my previous blog posts, you will know that we've been hard at work applying Azure Synapse to real customer workloads. In the video I take you through the service by solving a real-world IoT problem for one of our telco customers.

Benchmarking Azure Synapse Analytics - SQL Serverless, using Polyglot Notebooks
There is a new service in town that promises to transform the way you query the contents of your data lake. Azure Synapse Analytics comes with a new offering called SQL Serverless, allowing you to query your data on demand with no need for pre-provisioned resources. When we heard about the new service we were keen to get involved, so for the last 10 months we've been working with the SQL Serverless product group to provide feedback on the service and to help ensure it meets our customers' needs. During this time we've put it through its paces by implementing a range of real-world use cases. We were particularly interested to see how it stacked up as a replacement for Data Lake Analytics, where to date there has been no clear and easy migration path.

Does Azure Synapse Analytics spell the end for Azure Databricks?
Have you invested, or are you about to invest, in Azure Databricks? If so, the new Spark offering in Azure Synapse Analytics has probably grabbed your attention, and rightly so. Why is Microsoft putting yet another Spark offering on the table, and what does it mean for you?

5 Reasons why Azure Synapse Analytics should be on your roadmap
Explore 5 key reasons to choose Azure Synapse Analytics for your cloud data needs, based on years of experience in driving customer outcomes.

Why Power BI developers should care about the new read/write XMLA endpoint
Whilst "read/write XMLA endpoint" might seem like a technical mouthful, its addition to Power BI is a significant milestone in the strategy of bringing Power BI and Analysis Services closer together. As well as closing the gap between IT-managed workloads and self-service BI, it presents a number of new opportunities for Power BI developers in terms of tooling, process and integrations. This post highlights some of the key advantages of this new capability and what they mean for the Power BI developer.

Testing Power BI Reports using SpecFlow and .NET
Ensure Power BI report quality by connecting to tabular models, executing scenario-based specs, and validating data, business rules, and security.

Recording of Azure Oxford talk on combatting illegal fishing with Azure (for less than £10/month)
Jess and Carmel recently gave a talk at Azure Oxford on combatting illegal fishing with Machine Learning and Azure, for less than £10 per month. The recording of that talk is now available for viewing! The talk focuses on the recent work we completed with OceanMind. They run through how to construct a cloud-first architecture based on serverless and data analytics technologies, and explore the important principles and challenges in designing this kind of solution. Finally, we see how the architecture designed through this process not only provides all the benefits of the cloud (reliability, scalability, security) but, thanks to the pay-as-you-go compute model, has a compute cost that we could barely believe!

Testing Power BI Dataflows using SpecFlow and the Common Data Model
Validating Power BI Dataflows is essential for reliable insights. Endjin employs automated quality gates in the development process, ensuring confidence in complex Power BI solutions.

Azure Analysis Services - how to save money with automatic shutdown
Azure Analysis Services offers a scalable analytical platform. This post explains how to manage costs in multi-environment scenarios through automation with PowerShell and Azure DevOps.

Building a proximity detection pipeline
At endjin, our approach focuses on using the scientific experimental method to support fully proved and tested decision making, and on using scientific research to support our work. This post runs through how we applied that process to create a pipeline to detect vessel proximity. It is based on the project we recently worked on with OceanMind, in which we helped them build a serverless architecture that could detect vessel proximity in close to real time. The vessel proximity events we detected were then fed into machine learning algorithms in order to detect illegal fishing! Carmel also runs through some of the actual calculations we used to detect proximity, how we used data projections to efficiently process large quantities of incoming data, and the use of Durable Functions to orchestrate the processing.

Azure Analysis Services: How to update the expression for a calculated column from .NET
Learn how to update Azure Analysis Services model schemas in custom .NET apps using the AMO SDK. Develop rich end-user features for run-time, user-driven "what if" analysis.

Optimising C# for a serverless environment
In our recent project with OceanMind we used Azure Functions to process marine vessel telemetry from around the world. This involved processing huge quantities of data in close to real time. We optimised our processing for a serverless environment, with the result that the compute costs less than £10 per month! This post summarises some of the techniques we used, including concrete examples of the optimisations we made.

Azure Analysis Services - How to process an asynchronous model refresh from .NET
Incorporate Azure Analysis Services in custom apps, going beyond read-only queries. This post explains using REST API in .NET apps for async model refreshes, ensuring efficient updates.
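
As a rough illustration of the REST approach (a sketch, not the post's code; region, server, model and token are placeholders):

```csharp
// Hedged sketch: trigger an asynchronous model refresh through the Azure
// Analysis Services REST API. The bearer token must be an AAD token issued
// for the Azure Analysis Services resource.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AsyncRefreshExample
{
    static async Task Main()
    {
        var refreshUri =
            "https://<region>.asazure.windows.net/servers/<server>/models/<model>/refreshes";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<aad-access-token>");

        var body = "{\"Type\":\"Full\",\"CommitMode\":\"transactional\",\"MaxParallelism\":2}";
        var response = await client.PostAsync(
            refreshUri, new StringContent(body, Encoding.UTF8, "application/json"));

        // A 202 Accepted means the refresh runs asynchronously; the Location
        // header points at a status URI you can poll for completion.
        Console.WriteLine($"{(int)response.StatusCode}: {response.Headers.Location}");
    }
}
```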

Introducing Ais.Net - High-Performance Parsing in C#
As part of our work with OceanMind, endjin wrote a high performance .NET AIS parser. AIS (Automatic Identification System) is how commercial ships report location information. This blog describes the parser, and the performance techniques it uses.

Azure Analysis Services: How to execute a DAX query from .NET
Explore endless possibilities with dynamic DAX queries in C# for Azure Analysis Services integration in custom apps using the provided code samples.
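
A minimal sketch of the idea using the ADOMD.NET client (the Microsoft.AnalysisServices.AdomdClient NuGet package); the connection string and DAX query below are placeholders, not code from the post:

```csharp
// Open a connection to an Azure Analysis Services model and execute a DAX
// query, reading the results back like any other data reader.
using System;
using Microsoft.AnalysisServices.AdomdClient;

class DaxQueryExample
{
    static void Main()
    {
        var connectionString =
            "Data Source=asazure://<region>.asazure.windows.net/<server>;" +
            "Initial Catalog=<model>;User ID=<user>;Password=<secret>;";

        using var connection = new AdomdConnection(connectionString);
        connection.Open();

        using var command = connection.CreateCommand();
        command.CommandText =
            "EVALUATE SUMMARIZECOLUMNS('Date'[Year], \"Sales\", [Total Sales])";

        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader.GetValue(0)}: {reader.GetValue(1)}");
        }
    }
}
```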

British Science Week - inspiring the next generation of data scientists
The theme of this year's British Science Week (6 - 15 March 2020) is "Our Diverse Planet". We'll be getting involved by speaking to school children about the work we've been doing with Oxfordshire-based OceanMind (part of the Microsoft AI for Good programme) to help them combat illegal fishing, hopefully inspiring some of the next generation of data scientists!

Azure Analysis Services - How to query all the measures in a model from .NET
Integrate Azure Analysis Services beyond data querying, using model metadata for dynamic UIs and APIs. This post details .NET querying methods for deeper app integration.

Azure Analysis Services: How to open a connection from .NET
Learn to integrate Azure Analysis Services in apps by establishing server connections. Follow this guide with code samples for essential scenarios.

Azure Analysis Services - integration options using .NET, REST APIs and PowerShell
Explore Azure Analysis Services in custom apps using SDKs, PowerShell cmdlets & REST APIs. Learn to choose the right framework in this guide.

Azure Analysis Services: 8 reasons why you might want to integrate into a custom application
We've done a lot of work at endjin with Azure Analysis Services over the last couple of years - but none of it has been what you'd call "traditional BI". We've pulled, twisted and bent it in all sorts of directions, using its raw analytical processing power to underpin bespoke analysis products and processes. This post explains some of the common (and not-so-common) reasons why you might want to do similar things, and how Azure Analysis Services might be the key to unlocking your data insights.

AI for Good Hackathon
Towards the end of last year, Microsoft invited endjin along to a hackathon session they hosted at the IET in London as part of their AI for Good initiative. I've been thinking about the event and the broader work Microsoft is doing here a lot lately, because it gets to the heart of what I love about working in this industry: computers can magnify our power to do good.

Building a secure data solution using Azure Data Lake Store (Gen2)
In this blog we discuss building a secure data solution using Azure Data Lake. Data Lake has many features which enable fine-grained security and data separation. It is also built on Azure Storage, which means we can take advantage of all of its features while ADLS remains a cost-effective storage option! This post runs through some of the great features of ADLS and walks through an example of how we build our solutions using this technology!

Speaking at NDC London: Combatting illegal fishing with Machine Learning and Azure
In January 2020, Carmel is speaking about creating high performance geospatial algorithms in C# which can detect suspicious vessel activity, which is used to help alert law enforcement to illegal fishing. The input data is fed from Azure Data Lake Storage Gen 2, and converted into data projections optimised for high-performance computation. This code is then hosted in Azure Functions for cheap, consumption based processing.

C#, Span and async
The addition of ref struct types, most notably Span<T>, opened C# to a range of high performance scenarios that were impractical to tackle with earlier versions of the language. However, they introduce some challenges. For example, they do not mix very well with async methods. This article shows some techniques for mitigating this.
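
To make the problem concrete, here's a small sketch (not from the article) of the compiler restriction and one common mitigation: keep the async I/O in one method and the Span-based processing in a synchronous helper.

```csharp
// A Span<T> local cannot live in an async method, because ref structs
// can't be hoisted onto the heap when the compiler rewrites the method
// into a state machine.
using System;
using System.IO;
using System.Threading.Tasks;

static class SpanAsyncExample
{
    // This would NOT compile:
    // static async Task<int> SumAsync(Stream s)
    // {
    //     Span<byte> buffer = stackalloc byte[256]; // error: ref struct in async method
    //     ...
    // }

    // Mitigation: the async method works with Memory<byte>, and the
    // synchronous helper gets a Span<byte> from it.
    public static async Task<int> SumAsync(Stream stream)
    {
        var buffer = new byte[256];
        int read = await stream.ReadAsync(buffer.AsMemory());
        return Sum(buffer.AsSpan(0, read)); // span work stays synchronous
    }

    private static int Sum(ReadOnlySpan<byte> data)
    {
        int total = 0;
        foreach (byte b in data) total += b;
        return total;
    }
}
```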

Increasing performance via low memory allocation in C#
We worked on a project recently which required us to build a highly performant system for processing vast quantities of messages in real time. We had made the decision to run this processing using Azure Functions with C#. This post runs through some of the techniques we used for writing highly performant, low allocation code, including data streaming, list preallocation and the relatively new C# feature: Span<T>.
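
As a taste of two of those techniques (a sketch, not the project's code): preallocating a List<T> to its expected size, and slicing a ReadOnlySpan<char> instead of allocating substrings.

```csharp
using System;
using System.Collections.Generic;

static class LowAllocationExample
{
    public static List<int> ParseCsvLine(ReadOnlySpan<char> line, int expectedFields)
    {
        // Preallocation: avoids repeated internal array growth and copying.
        var values = new List<int>(expectedFields);

        while (!line.IsEmpty)
        {
            int comma = line.IndexOf(',');
            ReadOnlySpan<char> field = comma < 0 ? line : line[..comma];

            // int.Parse over a span: no substring allocation.
            values.Add(int.Parse(field));

            line = comma < 0 ? default : line[(comma + 1)..];
        }

        return values;
    }
}
```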

Import and export notebooks in Databricks
Sometimes it's necessary to import and export notebooks from a Databricks workspace. This might be because you have some generic notebooks that can be useful across numerous workspaces, or it could be that you're having to delete your current workspace for some reason and therefore need to transfer content over to a new workspace. Importing and exporting can be done either manually or programmatically. In this blog, we outline a way to recursively export/import a directory and its files from/to a Databricks workspace.

Demystifying machine learning using neural networks
Machine learning often seems like a black box. This post walks through what's actually happening under the covers, in an attempt to demystify the process! Neural networks are built up of neurons. In a shallow neural network we have an input layer, a "hidden" layer of neurons, and an output layer. In deep learning there are simply more hidden layers, which allows neurons' inputs and outputs to be combined to build up a more detailed picture. If you have an interest in machine learning and what is really happening, definitely give this a read (WARNING: some algebra ahead...)!
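
For a concrete feel of that layered structure, here's a toy forward pass in C# (illustrative only; the weights are hand-picked, not trained):

```csharp
// A shallow network is just input -> one hidden layer -> output, with each
// neuron computing a weighted sum pushed through an activation function.
using System;

static class ShallowNetwork
{
    static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // One layer: outputs[j] = sigmoid(sum_i inputs[i] * weights[j][i] + biases[j])
    static double[] Layer(double[] inputs, double[][] weights, double[] biases)
    {
        var outputs = new double[weights.Length];
        for (int j = 0; j < weights.Length; j++)
        {
            double sum = biases[j];
            for (int i = 0; i < inputs.Length; i++) sum += inputs[i] * weights[j][i];
            outputs[j] = Sigmoid(sum);
        }
        return outputs;
    }

    static void Main()
    {
        double[] input = { 0.5, -1.2 };

        double[][] hiddenWeights = { new[] { 0.8, -0.4 }, new[] { 0.2, 0.9 } };
        double[] hiddenBiases = { 0.1, -0.3 };

        double[][] outputWeights = { new[] { 1.5, -0.7 } };
        double[] outputBiases = { 0.05 };

        // "Deep" learning simply stacks more of these Layer calls.
        double[] hidden = Layer(input, hiddenWeights, hiddenBiases);
        double[] output = Layer(hidden, outputWeights, outputBiases);

        Console.WriteLine($"Network output: {output[0]:F4}");
    }
}
```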

Azure Databricks CLI "Error: JSONDecodeError: Expecting property name enclosed in double quotes:..."
Quite often it's beneficial to work with pre-built CLIs/SDKs to interact with your favourite tools, instead of making requests to the underlying REST API. Much of the complexity around constructing requests has been abstracted, and authentication is often easier. The Databricks CLI makes it easier to interact with your Databricks instance, but sometimes you can run into strange errors when constructing the values passed in as arguments. In this blog, we take a look at a JsonDecodeError that can occur when speaking to the Clusters CLI, and look at a way we can avoid this error.

Using Databricks Notebooks to run an ETL process
Here at endjin we've done a lot of work around data analysis and ETL. As part of this we have done some work with Databricks Notebooks on Microsoft Azure. Notebooks can be used for complex and powerful data analysis using Spark. Spark is a "unified analytics engine for big data and machine learning". It allows you to run data analysis workloads, and can be accessed via many APIs. This means that you can build up data processes and models using a language you feel comfortable with. Notebooks can also be run as an activity in an ADF pipeline, and combined with Mapping Data Flows to build up a complex ETL process which can be run via ADF.

Endjin is a Snowflake Partner
Snowflake is a cloud-native data warehouse platform that enables data engineering, data science, data lakes, data sharing and data warehousing. Endjin are very excited to announce our partnership.

Exploring Azure Data Factory - Mapping Data Flows
Mapping Data Flows are a relatively new feature of ADF. They allow you to visually build up complex data transformation sequences. This can aid in the streamlining of data manipulation and ETL processes, without the need to write any code! This post gives a brief introduction to the technology, and what this could enable!

Snowflake Connector for Azure Data Factory - Part 2

Snowflake Connector for Azure Data Factory - Part 1

A conversation about .NET, The Cloud, Data & AI, teaching software engineers and joining endjin with Ian Griffiths
When he joined endjin, Technical Fellow Ian sat down with founder Howard for a Q&A session. This was originally published on LinkedIn in 5 parts, but is republished here, in full. Ian talks about his path into computing, some highlights of his career, the evolution of the .NET ecosystem, AI, and the software engineering life.

Cosmos DB - Request Units charged for processing a Gremlin API request
If you're using the Gremlin API for Cosmos DB, you can now see how much each operation costs in Request Units.

Overflowing with dataflow part 1: An overview
This is the first blog in a series about dataflow. The series focuses on TPL Dataflow, but this post gives an overview of dataflow as a whole. The crucial thing to understand when using dataflow is that the data is in control. In most conventional programming languages, the programmer determines how and when the code will run. In dataflow, it is the data that drives how the program executes: the movement of data controls the flow of the program.
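
To see "the data is in control" in practice, here's a minimal TPL Dataflow pipeline (a sketch, not code from the series; requires the System.Threading.Tasks.Dataflow package):

```csharp
// A two-block pipeline where posting data is what drives execution: each
// message flows from the transform block into the action block.
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class DataflowExample
{
    static async Task Main()
    {
        var square = new TransformBlock<int, int>(x => x * x);
        var print = new ActionBlock<int>(x => Console.WriteLine(x));

        square.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

        // The data drives the flow: each Post pushes a message through the pipeline.
        for (int i = 1; i <= 5; i++) square.Post(i);

        square.Complete();
        await print.Completion;
    }
}
```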

Using Python inside SQL Server
Do you have a bunch of data in SQL Server that you're pulling over ODBC/JDBC to work with in Python? Using SQL Server's Python integration, you can connect to a SQL Server instance from your preferred IDE and perform the computations on the SQL Server machine. No more clunky data transfers. Operationalizing a Python model/script is as easy as calling a stored procedure, so any application that can talk to SQL Server can invoke the Python code and retrieve the results. Easy! This blog provides a few simple examples which make use of this capability, so you can get up and running as quickly as possible.

Snap Back to Reality – Month 2 & 3 of my Apprenticeship
Learn what types of things an apprentice gets up to at endjin a few months after joining. You could be learning about Neural Networks: algorithms which mimic the way biological systems process information. You could be attending Microsoft's Future Decoded conference, learning about Bots, CosmosDB, IoT and much more. Hopefully, you wouldn't be in hospital after a ruptured appendix!

How to plan your cloud transformation journey
We've been helping customers adopt Microsoft Azure since 2010, and we have produced a lot of thought leadership to help people think about the steps required, the risks involved, and how to plan a successful adoption.

Creating a PowerBI report with DirectQuery and multiple SQL Database sources using Elastic Query
Sometimes you want to build a Power BI dashboard that pulls in data from two different data sources. In this blog post Alice Waddicor demonstrates how you can use DirectQuery and multiple databases via Elastic Query.

AWS vs Azure vs Google Cloud Platform - Storage & Content Delivery

Year 2 as a software engineering apprentice at endjin
Alice reflects on year 2, being given more responsibility, diving deeper into all aspects of software delivery, and the good habits she's been building.

Machine Learning - the process is the science
What do machine learning and data science actually mean? This post digs into the detail behind the endjin approach to structured experimentation, arguing that the "science" is really all about following the process, allowing you to iterate to insights quickly when there are no guarantees of success.

Embracing Disruption - Financial Services and the Microsoft Cloud
We have produced an insightful booklet called "Embracing Disruption - Financial Services and the Microsoft Cloud" which examines the challenges and opportunities for the Financial Service Industry in the UK, through the lens of Microsoft Azure, Security, Privacy & Data Sovereignty, Data Ingestion, Transformation & Enrichment, Big Compute, Big Data, Insights & Visualisation, Infrastructure, Ops & Support, and the API Economy.

Machine Learning - mad science or a pragmatic process?
This post looks at what machine learning really is (and isn't), dispelling some of the myths and hype that have emerged as the interest in data science, predictive analytics and machine learning has grown. Without any hard guarantees of success, it argues that machine learning as a discipline is simply trial and error at scale – proving or disproving statistical scenarios through structured experimentation.