SQL Stream Builder Data Transformations – Cloudera Blog


SQL Stream Builder (SSB) is a versatile platform for data analytics using SQL as part of Cloudera Streaming Analytics, built on top of Apache Flink. It enables users to easily write, run, and manage real-time continuous SQL queries on streaming data, with a simple user experience. 

Although SQL is a mature and well-understood language for querying data, it is inherently a typed language. A certain level of consistency is expected so that SQL can be leveraged effectively. As an essential part of ETL, as data is being consolidated, we will find that data from different sources is structured in different formats. It might be necessary to enrich, sanitize, and prepare data so that it is fit for consumption by the SQL engine. Data transformations in SSB give us the ability to do exactly that. 

What is a data transformation?

Data transformation in SSB makes it possible to mutate streaming data “on the wire” as it is being consumed into the query engine. The transformation is performed on the incoming records of a Kafka topic before SSB sees the data.

A few use cases where transformations can be a powerful tool:

  • If the data being collected has sensitive fields that we choose not to expose to SSB.
  • If the Kafka topic has CSV data to which we want to add keys and types.
  • If the data is in valid JSON format, but has non-Avro-compatible field names, no uniform keys, and so on.
  • If the messages are inconsistent.
  • If the schema you want doesn’t match the incoming Kafka topic.

Similar to UDFs, data transformations are by default written in JavaScript. The only requirement we do have is that once the data transformation has completed, it must emit JSON. Data transformations can be defined using the Kafka Table Wizard.
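To make the shape of such a transformation concrete, here is a minimal sketch covering the first use case above, dropping a sensitive field. The record stub and its field names are illustrative, not from the log data used later; inside SSB, the runtime supplies the `record` object, and the script must produce a JSON string.

```javascript
// Illustrative stub of the record object an SSB transformation receives.
// The payload and field names here are invented for the example.
var record = {
  value: JSON.stringify({ user: "alice", ssn: "123-45-6789", action: "login" })
};

var payload = JSON.parse(record.value);

// Drop the sensitive field before SSB ever sees it.
delete payload.ssn;

// The one hard requirement: the transformation must emit JSON.
var result = JSON.stringify(payload);
```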

The use case

The data we are using here is security log data, collected from honeypots: invalid authentication attempts against honeypot machines that are logged and published to a Kafka data source.

Here is an excerpt of the log entries in JSON that are streamed to Kafka:

{"host":"honeypot-fra-1","@version":"1","message":"Sep 11 19:01:27 honeypot-fra-1 sshd[863]: Disconnected from invalid user user port 34762 [preauth]","@timestamp":"2022-09-11T19:01:28.158Z","path":"/var/log/auth.log"}

{"@timestamp":"2022-09-11T19:03:38.438Z","@version":"1","message":"Sep 11 19:03:38 honeypot-sgp-1 sshd[6605]: Invalid user taza from port 41844","path":"/var/log/auth.log","host":"honeypot-sgp-1"}

{"@timestamp":"2022-09-11T19:08:30.561Z","@version":"1","message":"Sep 11 19:08:29 honeypot-sgp-1 kernel: [83799422.549396] IPTables-Dropped: IN=eth0 OUT= MAC=fa:33:c0:85:d8:df:fe:00:00:00:01:01:08:00 SRC= DST= LEN=40 TOS=0x00 PREC=0x00 TTL=240 ID=59466 PROTO=TCP SPT=48895 DPT=3389 WINDOW=1024 RES=0x00 SYN URGP=0 ","path":"/var/log/iptables.log","host":"honeypot-sgp-1"}

You probably noticed a few non-Avro-compatible field names in the data, one of them being @timestamp, which contains an ISO-formatted timestamp of when the security incident occurred. If you ingest this log data into SSB, for example by automatically detecting the data’s schema through sampling messages on the Kafka stream, these fields will be ignored before they get into SSB, even though they are present in the raw data. 

Further, if we have elected to use “Kafka event timestamps” as SSB row times, the timestamp that SSB records will be the time the message was ingested into Kafka. That might be OK for some cases. However, we will probably want to base our queries on when a security incident actually occurred. 
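The ISO-formatted @timestamp shown in the excerpts converts directly to epoch milliseconds with JavaScript’s Date.parse, which is the representation we will store in a new field; a quick sketch:

```javascript
// ISO-8601 timestamp taken from the first sample log entry.
var iso = "2022-09-11T19:01:28.158Z";

// Date.parse returns milliseconds since the Unix epoch (UTC).
var epochMillis = Date.parse(iso);

console.log(epochMillis); // 1662922888158
```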

We will solve this problem in three steps:

  1. Write a data transformation that creates a new field with an Avro-compatible name in each JSON entry. We populate the field with the value of the non-Avro-compatible @timestamp field.
  2. Change the schema of the data to include the new field that we emitted in step 1.
  3. Tell SSB to use this new field, which is now part of the schema, as the event timestamp.

The data transformation

This data transformation should happen before the events are written into the SSB table. You will find “Data Transformation” as one of the tabs under the table.

At the core of the data transformation there is a “record” object that contains the payload of the log data. The data transformation is set up as a construct under the table.

We want to create a new field called data_timestamp that is derived from the @timestamp field. We create a locally scoped variable to access the record’s payload dictionary. The timestamp field is parsed using the JavaScript Date object and added as a new key on the payload. We can, at that point, sanitize the fields that are not Avro compatible, and return the result as a stringified JSON object.

var payload = JSON.parse(record.value);
var output = payload;

output['data_timestamp'] = Date.parse(payload['@timestamp']);

delete output['@timestamp'];
delete output['@version'];

JSON.stringify(output);
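The transformation can be sanity-checked outside SSB by stubbing the record object with one of the sample log entries (the stub is ours; inside SSB, the runtime provides record):

```javascript
// Stub of the record object that SSB passes to a transformation; the value
// is the first sample log entry from the Kafka topic.
var record = {
  value: JSON.stringify({
    "host": "honeypot-fra-1",
    "@version": "1",
    "message": "Sep 11 19:01:27 honeypot-fra-1 sshd[863]: Disconnected from invalid user user port 34762 [preauth]",
    "@timestamp": "2022-09-11T19:01:28.158Z",
    "path": "/var/log/auth.log"
  })
};

// The same steps as the transformation above.
var payload = JSON.parse(record.value);
var output = payload;
output['data_timestamp'] = Date.parse(payload['@timestamp']);
delete output['@timestamp'];
delete output['@version'];

var emitted = JSON.stringify(output);
// emitted now carries data_timestamp (epoch milliseconds) and no @-prefixed fields.
```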


We can now add the new field data_timestamp to the schema so that it will be exposed to SQL queries. We can simply add the following fragment, describing the new field and its type, to the schema under the “Schema Definition” tab:


{
  "name": "data_timestamp",
  "type": "long",
  "doc": "Injected from a custom data transformation"
}
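For orientation, the fragment above belongs inside the fields array of the table’s Avro schema. The sketch below shows a trimmed version of such a schema; the record name and the other field entries are assumptions based on the sampled messages, not taken from SSB:

```javascript
// Trimmed, illustrative Avro schema with the new field appended to "fields".
// Only data_timestamp and its type come from the steps above; the rest is
// assumed from the sample records.
var schema = {
  type: "record",
  name: "honeypot_logs", // assumed record name
  fields: [
    { name: "host", type: "string" },
    { name: "message", type: "string" },
    { name: "path", type: "string" },
    // the fragment added under "Schema Definition":
    {
      name: "data_timestamp",
      type: "long",
      doc: "Injected from a custom data transformation"
    }
  ]
};

// Date.parse produces epoch milliseconds, which fit in an Avro long.
var hasField = schema.fields.some(function (f) {
  return f.name === "data_timestamp" && f.type === "long";
});
```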


The last step is to change the Kafka row time to use the new field that we just created. That function can be found under the “Event Time” tab’s “Input Timestamp Column.”

We can review the DDL changes that are going to be applied to the schema itself with “Update and Review.”

To summarize:

  • A new big integer data_timestamp field is added.
  • The eventTimestamp is used as the row time, formatted from the data_timestamp.


In this post, we have taken a deeper look at SSB’s data transformations. We looked at how to write a data transformation in JavaScript to extract a field from the payload and format it into a timestamp that can be configured as the SSB row time.

Anyone can try out SSB using the Stream Processing Community Edition (CSP-CE). The Community Edition makes developing stream processors easy, as it can be done right from your desktop or any other development node. Analysts, data scientists, and developers can now evaluate new features, develop SQL-based stream processors locally using SQL Stream Builder powered by Flink, and develop Kafka Consumers/Producers and Kafka Connect Connectors, all locally before moving to production in CDP.

Check out the full recording of Deploying Stateful Streaming Pipelines in Less Than 5 Minutes With CSP Community Edition.
