// Copilot in Fabric
The Microsoft Copilot service is now in preview in Microsoft Fabric (as of March 2024). This post is a rundown of the different experiences, what to expect from the services and how to work with them. Enable Copilot in Fabric If you are living outside the USA or France, you need to manually enable the Copilot service in the Fabric tenant. To do this, go to the tenant settings of your Fabric service and find the “Copilot and Azure OpenAI service (preview)” setting. [...]
// Fabric Constraints for the Lazy Coder
When working with data and building data models, I personally seldom use the constraints feature of a database. Call me lazy - but I think constraints add unnecessary complexity when building data models for reporting. Especially if you are working with some of the new platforms - like Microsoft Fabric, where you are using stateless compute, i.e. data storage is separated from the compute layer. I understand the need for constraints on other database systems, like OLTP systems. [...]
// KQL Data live copy to OneLake
Use your KQL data everywhere in Fabric Microsoft has released the final piece of the current puzzle around OneLake as a one-stop-shopping service for data in Fabric. Until now, we only had access to the KQL data in the KQL database. With this addition, we can finally say that OneLake is the one place for your data in Fabric. How to set it up You can set this new feature up at two levels in your KQL database - either at the database level or at the table level. [...]
// Event Hub to Eventstream
The Fabric service Eventstream can read data from an Event Hub service to help collect data from IoT devices and other streaming services. But how do you configure the Event Hub and Eventstream to work together? I’ll try to help with that in this blogpost. Start and configure the Event Hub First of all, we need an Event Hub in Azure. The process is quite easy. Choose to create a new service and search for the Event Hub service. [...]
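Once the Event Hub exists, a quick way to check that it actually receives data before wiring it up to Eventstream is to push a few test events to it. Below is a minimal sketch using the azure-eventhub Python package - the connection string, hub name and payload are placeholders of my own, not details from the post:

```python
import json
from azure.eventhub import EventData, EventHubProducerClient

# Placeholder values - copy the real connection string from the
# Event Hub namespace's "Shared access policies" blade in the portal.
CONNECTION_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>"
EVENTHUB_NAME = "iot-telemetry"

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

# Send a small batch of JSON events that Eventstream can pick up.
with producer:
    batch = producer.create_batch()
    for reading in ({"deviceId": "sensor-01", "temperature": 21.4},
                    {"deviceId": "sensor-02", "temperature": 19.8}):
        batch.add(EventData(json.dumps(reading)))
    producer.send_batch(batch)
```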
// Kusto in Fabric, with Magic
With the release of Microsoft Fabric, we also got the ability to use the Kusto engine in the platform. Not only can we now use our Kusto database and all the great stuff from the KQL language - we can also store KQL querysets and use those on our reporting platform. The new magic All of the above abilities are great additions to the Fabric service. But what's even more exciting is the new magic in Jupyter Notebooks. [...]
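For readers who have not used KQL magic commands in a notebook before, the idea looks roughly like this - a minimal sketch assuming the Kqlmagic package and the public 'help' cluster (both assumptions on my part, not details from the post):

```python
# Notebook cells - a sketch assuming the Kqlmagic package is available.

# Cell 1: install and load the extension
%pip install Kqlmagic --quiet
%reload_ext Kqlmagic

# Cell 2: connect to a Kusto database (interactive device-code login)
%kql AzureDataExplorer://code;cluster='help';database='Samples'

# Cell 3: run KQL directly from the notebook cell
%%kql
StormEvents
| summarize Events = count() by State
| top 5 by Events
```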
// Learn Kusto - Week 7
This week is going to be very technical. I’ve also learned something new by writing this post and I hope you will too. The Azure Data Explorer team has released a set of new functionality to help with clustering on data using a Log Reduce approach. The “old” approach Before the release described below, the ADX service had a good handful of features to help with anomaly detection and clustering on semi-structured data. [...]
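To make the “old” approach concrete, here is a small sketch of my own (not taken from the post) that runs the existing autocluster plugin against the public help cluster through the azure-kusto-data Python package:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Public demo cluster and Samples database, used here purely as placeholders.
cluster_uri = "https://help.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# autocluster() is one of the pre-existing clustering features:
# it finds common patterns of discrete attributes in the result set.
query = """
StormEvents
| where monthofyear(StartTime) == 5
| extend Damage = iff(DamageCrops + DamageProperty > 0, "YES", "NO")
| project State, EventType, Damage
| evaluate autocluster(0.6)
"""

response = client.execute("Samples", query)
for row in response.primary_results[0]:
    print(row)
```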
// T-SQL Tuesday 161
This month's T-SQL Tuesday is number 161 and is hosted by Reitse Eskens (T, L, B) - you can read the original invitation here or by clicking the image below. The topic of this month is our most fun T-SQL script. It can be either a procedure or a statement that I’ve written. My fun T-SQL script A couple of years back, I needed to find the most recently used parameters from a Reporting Services instance - SSRS (yes, it is old! [...]
// Learn Kusto - Week 6
This week it is almost Easter - time for a cozy and relaxing break with the family. Relaxing time is also part of this post, as I will show you some of the different metadata options you have in Kusto and what to do with them. The main metadata part in Kusto In Kusto and the services Azure Data Explorer and Synapse Data Explorer, there is one main part of the metadata queries - the . [...]
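For illustration, and assuming the post is talking about Kusto's management (“dot”) commands - an assumption on my part, since the excerpt is cut off - here is a minimal sketch of pulling database metadata through the azure-kusto-data Python package:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Public demo cluster and Samples database, used purely as placeholders.
cluster_uri = "https://help.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# Management ("dot") commands return metadata rather than table data:
# which tables exist, how the database schema is defined, and so on.
for command in (".show tables", ".show database schema"):
    result = client.execute_mgmt("Samples", command)
    for row in result.primary_results[0]:
        print(row)
```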
// Learn Kusto - Week 5
In today's post I’ll dive into the choice between the options when creating clusters for Azure Data Explorer. The underlying architecture for the Kusto/ADX clusters is built on virtual machines (VMs) with specific configurations. Dev/test vs production When creating the clusters from the Azure portal, you are presented with three options when choosing the compute specification. The compute specification is the method of setting up the clusters for the specific workload you are planning to put on the Kusto cluster. [...]
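The same choice can also be made outside the portal. Here is a rough sketch using the azure-mgmt-kusto Python package - the resource names and SKU names are placeholders of my own, so check the portal for the SKUs currently on offer:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.kusto import KustoManagementClient
from azure.mgmt.kusto.models import AzureSku, Cluster

# Placeholder identifiers - replace with your own subscription and names.
subscription_id = "<subscription-id>"
resource_group = "rg-kusto-demo"
cluster_name = "adxlearnkusto"

client = KustoManagementClient(DefaultAzureCredential(), subscription_id)

# Dev/test: a single node with no SLA - cheap, meant for experiments only.
dev_sku = AzureSku(name="Dev(No SLA)_Standard_E2a_v4", tier="Basic", capacity=1)

# Production: a Standard SKU with at least two nodes, so the cluster gets an SLA.
prod_sku = AzureSku(name="Standard_E8ads_v5", tier="Standard", capacity=2)

poller = client.clusters.begin_create_or_update(
    resource_group,
    cluster_name,
    Cluster(location="westeurope", sku=dev_sku),  # swap in prod_sku for production
)
print(poller.result().provisioning_state)
```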
// Learn Kusto - Week 4
Kusto cannot stand alone When using Kusto and Azure Data Explorer, the service cannot stand alone. It needs some data to give it value. This blogpost tries to give some input on two IoT services in Azure that seamlessly integrate with Azure Data Explorer (there are more, but let's narrow it down 😄). Azure IoT Hub vs Event Hub When working with Azure Data Explorer and loading data into the storage engine, you might have some streaming devices or services whose data should land in the engine. [...]