
Aggregation and Windowing Operation in Spark Streaming

In this blog, let us see how we can do aggregation and windowing operations on an Apache Spark Streaming DStream.

Let us consider an example where we receive data from Twitter based on some keyword filters. You can consider "Trump" as the keyword for now. We will be receiving a continuous stream of tweets that contain the keyword "Trump" in their text.
Now we have to find out the location from which we are receiving the maximum number of tweets matching the filter condition, i.e., we aggregate the data by location and find the location with the highest count.

We also want to know the counts over a time window. For example, let's consider a window duration of 30 seconds and a sliding interval of 10 seconds. That means every 10 seconds we should get the maximum count from the last 30 seconds.

Format : HH:MM:SS
00:00:00 – 00:00:10 => 1st window
00:00:00 – 00:00:20 => 2nd window
00:00:00 – 00:00:30 => 3rd window
00:00:10 – 00:00:40 => 4th window
00:00:20 – 00:00:50 => 5th window
00:00:30 – 00:01:00 => 6th window

Now let us write code.

First, we are going to import all the required packages.
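The exact packages depend on your Spark and twitter4j versions, but a minimal Scala sketch (assuming the spark-streaming-twitter integration) could start with imports like these:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils
import twitter4j.auth.OAuthAuthorization
import twitter4j.conf.ConfigurationBuilder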

Now we are going to set the log level to "error" so that the job does not generate too many logs, and then create a Spark Streaming context.
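For example, assuming a local master and a 10-second batch interval (the application name is just a placeholder):

// Keep the console quiet and create the streaming context
Logger.getRootLogger.setLevel(Level.ERROR)
val sparkConf = new SparkConf().setMaster("local[*]").setAppName("TwitterAggregation")
val ssc = new StreamingContext(sparkConf, Seconds(10))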

Now let us create a configuration builder and set all our access keys and access token secret keys on it. Let's pass this configuration builder object to OAuth and then create a Twitter stream using it.

Note: Please don't use the keys shown in the screenshot. I have already deleted them. Please create your own keys by signing up for a Twitter developer account.
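A sketch of that setup, with placeholder credentials that you must replace with your own:

// Placeholder keys – replace with the credentials from your Twitter developer account
val cb = new ConfigurationBuilder()
cb.setOAuthConsumerKey("YOUR_CONSUMER_KEY")
  .setOAuthConsumerSecret("YOUR_CONSUMER_SECRET")
  .setOAuthAccessToken("YOUR_ACCESS_TOKEN")
  .setOAuthAccessTokenSecret("YOUR_ACCESS_TOKEN_SECRET")

val auth = new OAuthAuthorization(cb.build())
val filters = Seq("Trump")  // keyword filter
val tweets = TwitterUtils.createStream(ssc, Some(auth), filters)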

Now let's filter the stream to keep only English-language tweets.
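For instance, using the language field on each twitter4j status:

// Keep only English-language tweets
val englishTweets = tweets.filter(status => status.getLang == "en")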

Now let us filter out those tweets whose location is null and create a tuple of (location, 1) for each remaining tweet. Then let us apply reduceByKeyAndWindow, providing the window duration and the slide duration.
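A rough sketch of that step, using the 30-second window and 10-second slide from above:

// Keep tweets that carry a user location, map each to (location, 1),
// then count per location over a 30-second window that slides every 10 seconds
val locationCounts = englishTweets
  .filter(status => status.getUser != null && status.getUser.getLocation != null)
  .map(status => (status.getUser.getLocation, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))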

Then let us find out where we are getting the most tweets from over the last 30 seconds, recomputed every 10 seconds, by ordering the aggregated data.
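One way to do that ordering is to sort each windowed RDD by count, for example:

// Sort each window's (location, count) pairs by count, highest first, and print the top entries
val topLocations = locationCounts.transform(rdd => rdd.sortBy(_._2, ascending = false))
topLocations.print()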

Now let us start the streaming context and make it wait until the streaming is terminated, so that it keeps receiving and processing Twitter data.
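That is the usual start/await pattern:

ssc.start()             // start receiving and processing tweets
ssc.awaitTermination()  // block until the streaming job is stopped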

Now, every 10 seconds, we can see the counts per location computed over the last 30 seconds.

Happy Learning!
