You are writing an application that analyzes tweets in near-real time for a live TV broadcast. A tweet sent 5 hours ago serves no purpose in this context. How would you do this?
We need to create an application capable of collecting the events and sending them to be ingested, or consumed. This is the Producer-Consumer Pattern: one process creates an event and transmits it, while the other receives the event and does something with the data.
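Before we get to Knative, the pattern itself can be sketched in a few lines of plain Python. This is only an illustration of the idea, not the tutorial's code: one thread produces events, another consumes them, and a queue sits in between.

```python
# Minimal producer-consumer sketch: one thread produces "tweets",
# another consumes them from a shared queue.
import queue
import threading

events = queue.Queue()

def producer():
    # Pretend these are tweets arriving in near-real time.
    for i in range(3):
        events.put(f"tweet-{i}")
    events.put(None)  # sentinel: signal that no more events are coming

def consumer(received):
    while True:
        event = events.get()
        if event is None:
            break
        received.append(event)  # "do something" with the event

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # → ['tweet-0', 'tweet-1', 'tweet-2']
```

Knative Eventing's job, as we'll see, is to wire these two halves together for you across process and network boundaries.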
Knative Eventing allows for the creation of first-party event sources via SinkBinding. In the simplest terms, a SinkBinding is responsible for matching your producer to a consumer. In this context, your consumer is the "event sink".
The producer can be any Kubernetes resource that embeds a PodSpec: a DaemonSet, Job, StatefulSet, Deployment, or even a Knative Service. If you aren't new to Knative, you may know this better as a ContainerSource. ContainerSources are coming back to Knative Eventing, but the favored solution is SinkBinding.
The ContainerSource YAML contained both the sink binding and the deployment definitions. While this was simpler, it also limited what Knative Eventing could use as a source. By decoupling the deployment definition from the sink definition, we open up the possibilities.
I have a tutorial here on GitHub. In it you will create a small Python application that pulls 25 tweets every 30 seconds and sends them to a Knative service that simply logs all incoming messages.
You will need to create an application in Twitter to get the necessary API keys to do this tutorial. Everything is detailed in the README file in the GitHub repository. Give it a try, then come back and let's talk.
Alright, you tried it? Wasn't it cool? Now you may be asking yourself why this matters. After all, couldn't you just write these applications to use some kind of messaging bus like RabbitMQ or Kafka?
Sure, but we are talking about serverless here. We are talking about finding ways to simplify the developer experience, and asking developers or operators to maintain additional tooling to make the application work moves us away from the goal of being serverless.
There are definitely cloud providers who offer fully managed versions of these tools, but fully managed != serverless. The maintenance requirements are out of the picture, but you still have to bind sources to sinks yourself.
Knative Eventing allows you to declaratively bind sources to their sinks. If you look at the code, you didn't have to hard-code where you wanted to send the data or where you were receiving it from. You also didn't have to import special libraries to connect to your source or sink.
The code for your event source used the K_SINK environment variable. Knative sets this variable so the source knows where to route its traffic. For my consumer, I created an endpoint that received all incoming traffic.
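To make this concrete, here is a hedged sketch (not the tutorial's exact code) of how an event source can pick up the K_SINK address that the SinkBinding injects and POST an event to it, using only the standard library:

```python
# Sketch of an event source that sends data to whatever address
# Knative injected via the K_SINK environment variable.
import json
import os
import urllib.request

def build_request(payload: dict) -> urllib.request.Request:
    """Build an HTTP POST aimed at the bound sink."""
    sink = os.environ["K_SINK"]  # injected by the SinkBinding at runtime
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        sink,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (inside the cluster, where K_SINK resolves):
#   urllib.request.urlopen(build_request({"text": "hello"}))
```

Notice that the code never names the consumer; the binding supplies the address, so the same container image can be pointed at any sink.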
The SinkBinding YAML was all that I needed in order to tell Knative how to route my events. This shows how you can simplify binding event sources to their receivers.
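For reference, a SinkBinding resource looks something like the following. The resource names here are illustrative, not the tutorial's exact ones; the `subject` is the producer (any PodSpec-able resource) and the `sink` is the consumer:

```yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: tweet-source-binding
spec:
  subject:                # the producer: a Deployment in this example
    apiVersion: apps/v1
    kind: Deployment
    name: tweet-source
  sink:                   # the consumer: here, a Knative Service
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```

When this is applied, Knative injects K_SINK into the subject's containers, pointing at the sink's address.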
This is a very simple example where events are sent straight from the source to the sink. In a future tutorial, we will show you the power of Brokers and Channels when you want more robust routing for your events.