
What’s the benefit of Calibo if it does not support your existing technology? 

Much like Inspector Gadget, Calibo boasts an impressive array of tech, stashed away in its arsenal. The right tool for the job at the right time, ready to pop up at a moment’s notice.  🛠️

However, whilst we’re not sure we can match Inspector Gadget’s enviable 13,000 gizmos (I Googled it!), with the number of cloud-native integrations we support, we’d wager that no amount of go-go-gadget-hands is going to provision a scalable digital product with a tech stack as flexibly as we can.  

Over 150 integrations, and you have the one we don’t support? 

That said, even with over 150 cloud-native integrations currently supported (and more being added), there may still be instances where prospects encounter a situation we’ve seen before. 

Something like: “Our tech stack uses XYZ specialist software, but Calibo doesn’t have an integration for XYZ. So we can’t use it, right?” It doesn’t plug right in and play – so surely the party’s over before it even began. No way – I did my hair for this! 

Let’s say you have a slight cold. You (hopefully) aren’t going to run straight to an ENT specialist as your first port of call. You are likely going to your general doctor due to their breadth of knowledge. It’s quite likely that, most of the time, they’re going to be able to help you. 

However, when you need a more specific investigation – you escalate to a specialist. In this metaphor, Calibo Data Fabric serves as our GP, and dbt is our specialist.  

How Calibo works with tools that are not ‘supported’  

At the point you need deeper, more enriched data transformation capabilities, we might turn to a tool like dbt. Here’s the kicker: at the time of writing, dbt is not a supported integration on the Calibo platform.

But if you’ve been paying attention (and didn’t just ask ChatGPT to summarise this for you as a paragraph), we’re about to see how we can still work with ‘unsupported’ tools such as dbt that may be part of our existing pipelines. 

Before we look at this specific example, there are two main things we need to consider. First, before we try to force the integration ourselves: could it be that your tool is already in the pipeline for integration?  

Calibo schedules a release approximately once every three months, so there’s a chance that the development for the native integration of your tool is already underway and you’re just a short wait from seeing it on the Calibo platform.  

If not, it is possible that your chosen tool is already on our radar, and public interest in said integration could be the catalyst needed to expedite its development. (In either case, the best way to find out is to get in touch via sales@calibo.com.) 
The example we’re going to work through today is specific to one tool, but what we’re really trying to instill is resourcefulness.

Calibo already offers an incredible number of integration options, and the quadruple D’s (define, design, develop, deploy) to which Calibo adheres provide firm guardrails that expedite development times while enforcing governance.  

Nevertheless, we don’t want to miss out on any of that value just because one of the links in our chain doesn’t fit without a little encouragement. We’re working with software here, and against the time and money we can save by switching to Calibo, the cost of a little creativity sounds increasingly appealing.  

The key enablers in Calibo are its features and integrations that support ‘extensibility’ (in software development, the ability to add new functionality later).  

Throughout my years as a consultant, I’ve found that extensibility is most effectively applied at the inputs and outputs. Consider how you might stir your coffee when you’re out of spoons: whether you use a knife or a fork is irrelevant. What matters is finding a solution so you can move on to the more important tasks. 

[Example] How to integrate your tool, step-by-step  

To set the scene, we’re going to imagine we’re a retail e-commerce site with inventory in multiple locations, which is currently managed manually from an Excel upload.  

I’m going to assume you’ve got some prior knowledge of Calibo, and that before we start: 

1. You’ve set up your Calibo instance, configured your cloud tools and technologies, and onboarded users onto the platform.  

2. The next step is to build our first data pipeline in the Data Fabric Studio (DFS). In our example, we’ll pick up a manually curated file from an S3 bucket and transfer it to Snowflake, where our dbt instance is configured to process it.  
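Under the hood, an S3-to-Snowflake transfer like this typically boils down to a staged `COPY INTO`. As a rough sketch of what the pipeline effectively runs (the table, stage, and file names here are invented for illustration; in Calibo you configure this through the Data Fabric Studio UI rather than by hand):

```python
# Hypothetical sketch of the SQL behind an S3-to-Snowflake load step.
# Table, stage, and file names are placeholders, not real Calibo objects.

def build_copy_statement(table: str, stage: str, file_path: str) -> str:
    """Build a Snowflake COPY INTO statement for a CSV file staged on S3."""
    return (
        f"COPY INTO {table} "
        f"FROM @{stage}/{file_path} "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )

sql = build_copy_statement(
    table="RAW.INVENTORY_UPLOAD",
    stage="S3_INVENTORY_STAGE",
    file_path="uploads/inventory.csv",
)
print(sql)
```

Once the file lands in the `RAW` schema, it becomes the source that our dbt models will pick up.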

3. Let’s schedule our run for an appropriate time; we want our data to be ready as folks arrive at the office. To do so, click the ellipses at the top right of the pipeline screen.

4. From here, we’ll be able to leverage a brand-new feature in Calibo. The latest update allows you to create multiple data pipelines on the same product.  

5. We need to set up our initial pipeline to generate our source data from our origin file.  

6. Let’s click ‘New Pipeline’. Give the pipeline a new name and a worthy description, and click Next. 

7. At this point, we’ve got data flowing through our pipeline into our data lake where we’ll process it with dbt, our ‘unsupported’ tool. The next logical step is to build our dbt models. 

8. So, we’ll build our dbt model. We have a basic Kimball model that creates a fact and some dimensions and uses them to return a stock analysis data mart that we’re going to use to run some analysis for our e-commerce platform.  
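To make the Kimball shape concrete, here is a toy illustration of what the dbt models are doing: a fact table of stock levels resolved against product and location dimensions to produce the stock-analysis mart. The tables, keys, and values are invented for this example; in practice this logic lives in dbt SQL models, not Python.

```python
# Toy illustration of the Kimball pattern behind the dbt models:
# a fact table joined to dimensions to produce a stock-analysis mart.
# All table contents and keys are invented for this example.

dim_product = {1: "T-shirt", 2: "Mug"}
dim_location = {10: "London", 20: "Leeds"}

fact_stock = [
    {"product_id": 1, "location_id": 10, "on_hand": 120},
    {"product_id": 2, "location_id": 20, "on_hand": 35},
]

def build_stock_mart(facts, products, locations):
    """Resolve surrogate keys against the dimensions, as the dbt model's joins would."""
    return [
        {
            "product": products[f["product_id"]],
            "location": locations[f["location_id"]],
            "on_hand": f["on_hand"],
        }
        for f in facts
    ]

mart = build_stock_mart(fact_stock, dim_product, dim_location)
print(mart[0])  # {'product': 'T-shirt', 'location': 'London', 'on_hand': 120}
```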

Using dbt Cloud, we can build and schedule our models. 

9. Next, we set up a new environment for our job to run in; let’s call it PROD. 

10. Set up a job to run in our environment. We can edit our job by going to ‘Triggers’.  

11. There, we could use ‘specific hours’ to set the timing of the runs if we want them to run on an exact hourly interval. I know my Calibo run will be done by 5:30 AM. I want my dbt run to pick up the data when it’s ready and only run during the week.  

12. We can set up a CRON schedule that runs at 5:30 AM every day, Monday through Friday. 
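That schedule is the cron expression `30 5 * * 1-5`: minute 30, hour 5, any day of the month, any month, Monday through Friday. A minimal sketch of how those fields map onto a timestamp (using Python’s standard library for illustration):

```python
from datetime import datetime

# The schedule from step 12 as a cron expression:
# minute 30, hour 5, any day, any month, Monday-Friday.
CRON = "30 5 * * 1-5"

def matches_weekday_run(ts: datetime) -> bool:
    """Check whether a timestamp falls on the 5:30 AM Mon-Fri schedule."""
    # datetime.weekday(): Monday == 0 ... Sunday == 6
    return ts.minute == 30 and ts.hour == 5 and ts.weekday() < 5

print(matches_weekday_run(datetime(2024, 6, 3, 5, 30)))  # a Monday -> True
print(matches_weekday_run(datetime(2024, 6, 8, 5, 30)))  # a Saturday -> False
```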

13. If you’re using a containerised command-line instance of dbt, you can go a step further and have dbt trigger automatically when data is ready, by using a REST API call from the Calibo interface to instantiate your environment and call dbt run. 
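If you’re on dbt Cloud instead, its REST API exposes a job-trigger endpoint, so the same “run when the data is ready” idea applies. A hedged sketch, assuming dbt Cloud’s v2 API; the account ID, job ID, and token below are placeholders, and the request is constructed but not sent:

```python
import json
import urllib.request

# Placeholder credentials -- in practice these come from a secrets store.
ACCOUNT_ID = 12345
JOB_ID = 67890
API_TOKEN = "dbt-cloud-token"

def build_trigger_request(account_id: int, job_id: int, token: str) -> urllib.request.Request:
    """Construct (but don't send) the POST that kicks off a dbt Cloud job run."""
    url = f"https://cloud.getdbt.com/api/v2/accounts/{account_id}/jobs/{job_id}/run/"
    body = json.dumps({"cause": "Triggered by Calibo pipeline"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
    )

req = build_trigger_request(ACCOUNT_ID, JOB_ID, API_TOKEN)
print(req.full_url)
# To actually fire the job: urllib.request.urlopen(req)
```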

14. For the purposes of this example, we’re going to run our pipeline on both Calibo and dbt on a schedule. For integrity purposes, we can also configure Calibo and dbt to notify us if an operation should fail.  

15. So, we’ve got Calibo preparing our data, and dbt transforming it. All that’s left is to process the output in Calibo via our Jupyter machine-learning model, et voilà!

An end-to-end data pipeline is orchestrated via Calibo using powerful technology that Calibo does not natively support.  

There we go.  

One of the core issues that Calibo addresses is the cacophony of tech integrations in the modern development stack; it becomes the glue that binds them together.  

On the surface, Calibo may not have every solution right out of the box, but that doesn’t have to be a showstopper.  

We just need to apply a little bit of creative problem-solving to the issue! 

Check out our factsheets for more info on what our platform can do.
