Coding Station

Logs, metrics, traces... yeah, but packets & flows too?

So over the last few weeks we have been working in the background, trying to determine whether the vendor's statement that "you can send anything" would fly.

We knew already, since our relationship began in mid-2021, that logs and metrics were easy pickings. Any log and any metric could be pushed or pulled, and we immediately got time-series data: a very simple use case we knew they already delivered.

In early 2022, traces were introduced. Again, push or pull the traces in and they are indexed without the need to configure anything. Traces are ingested, stored, indexed and searchable within a few seconds, making visualisation immediate. The clever engineers even added contextual linkage between spans and logs to make it easy to pivot between the two when navigating. This was a nice addition, as some vendors still do not offer this feature today.
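The span-to-log pivot described above can be sketched as a simple join on a shared trace ID. This is a minimal illustration assuming both record types carry a `trace_id` field (a common convention, not the vendor's documented schema):

```python
# Sketch: correlate spans and logs via a shared "trace_id" field,
# so you can pivot from a span to its logs and back.

def correlate(spans, logs):
    """Group spans and logs by trace_id."""
    by_trace = {}
    for span in spans:
        entry = by_trace.setdefault(span["trace_id"], {"spans": [], "logs": []})
        entry["spans"].append(span)
    for log in logs:
        tid = log.get("trace_id")
        if tid in by_trace:
            by_trace[tid]["logs"].append(log)
    return by_trace
```

Logs without a trace ID simply stay outside the correlation, which is why stamping the ID onto log lines at emit time is what makes the pivot possible.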

Around the same time, in mid-2022, Gartner released the APM and Observability Magic Quadrant. While Dynatrace and Datadog fought for leadership, with New Relic hot on their trail, there were many newcomers such as VMware and Honeycomb appearing for the first time.

So we started to check out the entire quadrant's technical capabilities, comparing vendors one by one to get some inspiration on what we could also consider. The exercise actually told us that not all quadrant vendors had a logging or a tracing capability, and ALL of the vendors had limitations when we reviewed log or trace data retention periods (some only 15 days).

What surprised us even further was that the majority of the vendors did not yet support OpenTelemetry, they lacked synthetic testing, and nearly all only supported data from their own agents rather than push or pull from third parties. This made us seriously question how the hell they actually made the quadrant at all (and also why this vendor wasn't even considered).

Back to the vendor and their statement of "you can send anything". We knew that all telemetry in the "APM and Observability" world was doable, easily too. What else is there to throw in?

Another world we knew very well is networking. In this world, packets are king and they "never lie", according to many. NetFlow records are the conversational network metrics built from those packets as they travel from site to site. (Packets and NetFlow data actually belong in a separate "network performance monitoring and diagnostics" quadrant at Gartner.)
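To make the packet/flow distinction concrete, here is a minimal sketch of how NetFlow-style records arise: individual packets sharing the same 5-tuple (source, destination, ports, protocol) are collapsed into one conversation record with packet and byte counters. The packet dictionaries are hypothetical summaries for illustration, not any vendor's schema:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """The classic 5-tuple identifying one network conversation."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str  # e.g. "TCP" or "UDP"

def aggregate_flows(packets):
    """Collapse per-packet records into NetFlow-style conversation
    counters: (packets, bytes) per 5-tuple."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = FlowKey(pkt["src_ip"], pkt["dst_ip"],
                      pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)
```

This is why flows are so much cheaper to retain than packets: the payload is gone, but the "who talked to whom, when, and how much" survives.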

We tried some basic tcpdumps from our workstations and servers, and they all made it in with a single configuration step: opening up the listener on the SaaS environment (60 seconds of work). Within 2-3 seconds, our search started to return results. Below, the results from our tcpdump.
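The forwarding side of that experiment amounts to streaming raw capture bytes to a TCP listener. A minimal sketch, assuming a plain TCP listener endpoint (host and port are placeholders, not the vendor's real ingest address):

```python
import socket

def stream_to_listener(data: bytes, host: str, port: int) -> int:
    """Send raw capture bytes (e.g. the output of `tcpdump -w -`)
    to a TCP listener. Returns the number of bytes sent."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(data)
    return len(data)
```

In practice you would pipe a live capture into something like this, in the spirit of `tcpdump -i eth0 -w - | forwarder`, rather than buffering whole captures in memory.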

We then moved up a gear and configured VPC Flow Logs in AWS and a SPAN port on some old lab equipment. The lab had very little traffic, but enough to test the scenario.
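For context on what the AWS side emits: a VPC Flow Log record in the default (version 2) format is one space-separated line per flow. A hedged parsing sketch, with field names taken from the AWS documentation:

```python
# Field order of the AWS VPC Flow Logs default (version 2) format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]
INT_FIELDS = {"version", "srcport", "dstport", "protocol",
              "packets", "bytes", "start", "end"}

def parse_flow_log(line):
    """Turn one default-format flow-log line into a dict,
    casting the numeric fields to int."""
    record = dict(zip(FIELDS, line.split()))
    for field in INT_FIELDS:
        record[field] = int(record[field])
    return record
```

Note that `protocol` is the IANA protocol number (6 for TCP, 17 for UDP), and `start`/`end` are Unix timestamps, so flows are trivially convertible into the time-series data mentioned earlier.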

Although we were trying to prove whether "anything" could be handled, and to try something different, we suspected that even if we managed to send packets and flows, it didn't mean we could actually do anything with the data yet. Would it be indexed? Could we search it or create a dashboard?

A simple protocol search for TCP or UDP returned results. An IP address search returned results. An application search for HTTP also returned results across both packets and flows. The results gave us the timestamp, source, destination, protocol and length out of the box: enough to know a conversation took place. Metrics were also available immediately, so we started to visualise in the dashboard area and test some alerts. Much easier than we expected.
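The searches above boil down to filtering records on field/value criteria. A toy sketch over the kind of records we got back (field names are illustrative, not the platform's query language):

```python
def search(records, **criteria):
    """Return records matching ALL given field=value criteria,
    mimicking a simple protocol or IP address search."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]
```

For example, `search(records, protocol="TCP")` plays the role of our protocol search, and `search(records, dst="10.0.0.2")` the IP address search.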

So you really can send anything! Putting all of the telemetry together, we are convinced the vendor offers a unique and powerful solution for any organisation. Looking at the image below, an organisation would otherwise need three or four tools to cover everything.

For us, adding network telemetry actually disrupts "observability" as we know it. They are the first and only vendor to ingest NetFlow and packets in the same solution. With a very simple pricing strategy based on data ingest, there are no hidden costs such as agents, archiving, or paying for extended retention, trace volumes or metric volumes. For those with products from vendors with limited telemetry coverage, it would be very easy to use the platform as a bridge into those products: it would handle the heavy lifting easily, and metrics can be forwarded like any other data.

There are a few other key differentiators that no other observability vendor offers today:

> All data is stored in a location of your choice
> You control the retention of your data and can align it to compliance
> All data is treated equally and benefits from the same features, like machine learning / anomaly detection
> You can search, extract and forward any data to any location or vendor of your choice
> You OWN the first mile of all data and can keep silo-specific tools in place; just filter what you send
> Ingest any data from any source, covering existing tools and 3rd parties
> Built-in SIEM
> SOC 2 compliant
> Cloud native
> ETL and reverse ETL (data pipeline)
> Accepts data from open and vendor agents
> Stored in an OPEN format
> Low maintenance: no indexes, no storage to manage
> Replay any data from any point in time

Reach out for more information on the above.

