The audio is in Bangla; you can switch to HD video by toggling the YouTube settings. (I do not prepare scripts beforehand, so please pardon my clumsiness.)
Let’s assume you have a desktop application built with Python. It could be a traditional GUI app built with PyQt/wxPython/Kivy or any other GUI framework. Or it could be a web server that serves a browser-based HTML GUI to the user. Either way, you have “frozen” the app using cx_freeze, py2app/py2exe or pyinstaller and now you want to add “auto update” to the app, so when a new version of the application is available, the app can download and install the update automatically. For this particular task, I found esky to be a good viable option. In this article, I am going to demonstrate how we can use esky to deliver updates to our apps.
If we want to use Esky to deliver updates, we need to freeze the app first. But this time, we will ask Esky to freeze the app for us, using our freezer of choice. For example, if we used py2app before, we will still use py2app – but instead of invoking it directly, we will tell Esky to use it to freeze the app for us. This step is necessary so that Esky can inject the parts needed to handle updates/patches and install them gracefully.
For the apps to locate and download the updates, we need to serve the updates from a location on the internet or the local network. Esky produces a zip archive which we can directly put on our webserver. The apps we freeze need to know the URL of the webserver and must have access to it.

On the other hand, inside our app, we need to write some code which will scan the URL of the above-mentioned webserver, find any newer updates and install them. Esky provides nice APIs to do this.
So now that we know the steps to follow, let’s start.
If you have frozen an app before, you probably know what a setup file is and how to write one. Here’s a sample that uses py2app to freeze an app:
```python
import sys

from esky import bdist_esky
from distutils.core import setup

PY2APP_OPTIONS = {"includes": ['ssl', 'sip', 'PyQt4']}
DATA_FILES = ['my_pyqt_ui_file.ui']

# Using py2app
setup(
    name="My Awesome App",
    version="0.1",
    scripts=["main.py"],
    data_files=DATA_FILES,
    options={"bdist_esky": {
        "freezer_module": "py2app",
        "freezer_options": PY2APP_OPTIONS,
    }}
)
```
Now we can generate the frozen app using:
```shell
python setup.py bdist_esky
```
This should generate a zip archive in the dist directory.
Collect the zip file from the dist directory and put it somewhere accessible on the internet. For local testing, you can probably use Python’s built-in HTTP server to distribute it.
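For instance, assuming the archive landed in dist/ and port 8000 is free (both assumptions), serving it locally could look like this:

```shell
# Serve the dist/ directory over HTTP on port 8000 (local testing only)
cd dist
python -m http.server 8000    # Python 3; on Python 2: python -m SimpleHTTPServer 8000
```

The URL we pass to Esky in the client code would then be http://localhost:8000.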
Now we will see the client side code that we need to write to locate and install the updates.
Here’s some code taken from a PyQt app. The find_esky_update method is part of a QMainWindow class. It is called inside the onQApplicationStarted method, so it checks for updates as soon as the application starts.
```python
def find_esky_update(self):
    if getattr(sys, "frozen", False):
        updater = esky.Esky(sys.executable, "http://localhost:8000")
        if updater.find_update():
            reply = QtGui.QMessageBox.question(
                self, 'Update',
                "New update available! Do you want to update?",
                QtGui.QMessageBox.Yes | QtGui.QMessageBox.No,
                QtGui.QMessageBox.No)
            if reply == QtGui.QMessageBox.Yes:
                updater.auto_update(self.handle_esky_status)
        else:
            print("No new updates found!")
    else:
        print("App is not frozen!")
```
We first check if the app is frozen. If it’s not, then there’s no way we can install updates. sys.frozen will contain information about the app if it’s frozen; otherwise it will not be available. So we first ensure that it is indeed a frozen app.
Then we create an Esky app instance by providing it the URL of our webserver (where the updates are available). We only pass the root URL (without the zip file name). The find_update() method on the Esky app will look for a newer version and return some information if an update is available; otherwise it will return a falsy value.
If an update is available, we ask the user whether they want to update. Here we used QMessageBox for that. If they agree, we call the auto_update method with a callback. auto_update downloads the update and installs it. The callback we pass gets called every time something happens during the process, which makes it a good way to display download progress.
Let’s see our example code here:
```python
def handle_esky_status(self, message):
    progress = None
    if message['status'] == 'downloading':
        progress = int((float(message['received']) / float(message['size'])) * 100)
    elif message['status'] == 'ready':
        progress = 100

    if progress is not None:
        print(progress)
        # self.progressBar.setValue(progress)
```
As you can see from the code, the callback gets a dictionary which has a status key. If it is “downloading”, we also get the amount of data received so far and the total size. We can use these to calculate the progress and print it, or display a nice progress bar if we wish.
So basically, this is all we need to find and install updates.
We have learned to use Esky and seen how to add auto update to our app. Now it’s time to build a new update. That is easy: we go back to the setup.py file we defined earlier. We had version="0.1" inside the setup() function. We need to bump it, so let’s make it 0.2 and build again. We will get a new zip file (the file name contains the version, if you notice carefully). Drop it on the webserver (the URL where we put our app). Run an older copy of the app (which includes the update checking code described above). It should ask you for an update 🙂
Please note, you need to call the find_esky_update() method for the prompt to trigger. As I mentioned above, I run it in the onQApplicationStarted method for PyQt. You need to find the appropriate place to call it from in your application.
You can find a nice tutorial with step by step instructions and code samples here: https://github.com/cloudmatrix/esky/tree/master/tutorial
In this post, we would like to see how we can limit user access to our Django views.
If you have worked with Django, you probably have used the login_required decorator already. Adding the decorator to a view limits access to logged-in users only. If the user is not logged in, they are redirected to the default login page. Or we can pass a custom login URL to the decorator for that purpose.
Let’s see an example:
```python
from django.contrib.auth.decorators import login_required


@login_required
def secret_page(request):
    return render_to_response("secret_page.html")
```
There’s another nice decorator – permission_required – which works in a similar fashion:
```python
from django.contrib.auth.decorators import permission_required


@permission_required('entity.can_delete', login_url='/loginpage/')
def my_view(request):
    return render_to_response("entity/delete.html")
```
Awesome, but let’s learn how they work internally.
We saw the magic of the login_required and permission_required decorators. But we’re the men of science and we don’t like to believe in magic. So let’s unravel the mystery of these useful decorators.

Here’s the code for the login_required decorator:
```python
def login_required(function=None, redirect_field_name=REDIRECT_FIELD_NAME, login_url=None):
    """
    Decorator for views that checks that the user is logged in, redirecting
    to the log-in page if necessary.
    """
    actual_decorator = user_passes_test(
        lambda u: u.is_authenticated(),
        login_url=login_url,
        redirect_field_name=redirect_field_name
    )
    if function:
        return actual_decorator(function)
    return actual_decorator
```
By reading the code, we can see that the login_required decorator uses another decorator – user_passes_test – which takes a callable to determine whether the user should have access to the view. The callable must accept a user instance and return a boolean value. user_passes_test returns a decorator which is applied to our view.

If we look at the source of permission_required, we would see something quite similar. It also uses the same user_passes_test decorator.
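To see the pattern outside Django, here’s a minimal framework-free sketch of how a user_passes_test-style decorator works. The User class and the redirect string are made up for illustration; Django’s real version returns an HttpResponseRedirect instead.

```python
# Hypothetical stand-in for Django's user object
class User:
    def __init__(self, is_authenticated):
        self.is_authenticated = is_authenticated


def user_passes_test(test_func, login_url="/login/"):
    """Return a decorator that runs the view only if test_func(user) is truthy."""
    def decorator(view_func):
        def wrapped(user, *args, **kwargs):
            if test_func(user):
                return view_func(user, *args, **kwargs)
            # Stand-in for Django's redirect response
            return "redirect to {}".format(login_url)
        return wrapped
    return decorator


@user_passes_test(lambda u: u.is_authenticated)
def secret_page(user):
    return "secret content"


print(secret_page(User(True)))   # secret content
print(secret_page(User(False)))  # redirect to /login/
```

The real decorator additionally preserves the requested URL in a query parameter (redirect_field_name) so the user can be sent back after logging in.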
Now that we know how to limit access to a view based on whether the logged in user passes a test, it’s quite simple for us to build our own decorators for various purposes. Let’s say we want to allow access only to those users who have verified their emails.
```python
from django.contrib.auth.decorators import user_passes_test
from django.contrib.auth import REDIRECT_FIELD_NAME


def check_email_verification(user):
    return EmailVerification.objects.all().filter(user=user, verified=True)


def check_email(function=None, redirect_field_name=REDIRECT_FIELD_NAME, login_url=None):
    """
    Decorator for views that checks that the user has verified their email
    address, redirecting to the log-in page if necessary.
    """
    actual_decorator = user_passes_test(
        check_email_verification,
        login_url=login_url,
        redirect_field_name=redirect_field_name
    )
    if function:
        return actual_decorator(function)
    return actual_decorator
```
Now we can apply the decorator to a view like this:
```python
@login_required
@check_email(login_url="/redirect/login/?reason=verify_email")
def verified_users_only(request):
    return render_to_response("awesome/offers.html")
```
Users who have verified their email addresses will be able to access this view. If they haven’t, they will be redirected to the login view. Using the reason query string, we can display a nice message explaining what’s happening.
Please note, we have used two decorators on the same view. We can use multiple decorators like this to make sure the user passes all the tests we require them to.
If you work with Python regularly, you probably know about IPython already. IPython has web-based notebooks, Qt-based GUI consoles and the plain old simple terminal-based REPL, which is simply fantastic. But that’s not all – we can also embed IPython in our applications, and this can lead to a number of potential use cases.

A common use case is to drop into an IPython shell for quick interactive debugging. This can come in very handy during prototyping.
Let’s see an example:
```python
import IPython

name = "Masnun"

IPython.embed()
```
When we run this code, we will get a nice IPython REPL where we can try out things. In our case, we haven’t done much except defining a variable named name. We can print it out.
```
In [1]: print(name)
Masnun
```
I use Iron.io workers/queues/caches at my day-to-day job, so I often need to check the status of the workers, get the size of a queue, or even queue a few workers. I also need to check a few records on MongoDB. An interactive prompt can be really helpful for these tasks.
```python
import IPython
from iron_worker import IronWorker, Task
from iron_mq import IronMQ
from pymongo import MongoClient

worker = IronWorker(project_id="PROJECT_ID", token="TOKEN")
mq = IronMQ(project_id="PROJECT_ID", token="TOKEN")
mongo = MongoClient("MONGOURL")


def launch_workers(name, number=1):
    for x in range(number):
        task = Task(code_name=name)
        worker.queue(task)
    print("Launched {} workers for {}".format(number, name))


def top_buyers():
    customers_collection = mongo.shop.customers
    query = {
        "purchases": {"$gte": 100}
    }
    print(customers_collection.find(query).count())


IPython.embed()
```
Now I can just do launch_workers("send_emails", 3) to launch 3 worker instances for the “send_emails” worker, or get the number of buyers with more than 100 purchases with the top_buyers() function.
When we embed IPython, it displays its usual banner when starting.
```
Python 2.7.8 (default, Nov 15 2014, 03:09:43)
Type "copyright", "credits" or "license" for more information.

IPython 2.3.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
```
We can easily disable that. To do so, we need to pass an empty string as the banner1 parameter of the embed function.
```python
IPython.embed(banner1="")
```
Or we can further customize the 2nd banner or the exit message like this:
```python
IPython.embed(
    banner1="Entering Debug Mode",
    banner2="Here's another helpful message",
    exit_msg="Exiting the debug mode!"
)
```
There was a time back in 2014 and earlier when PyQt5 installation was not straightforward and needed manual compilation. When searching on Google, those old posts still come up as top results. But nothing to worry about – things have changed and it’s now quite simple.

If you are not already using Homebrew, you should start using it. Once Homebrew is installed, let’s install PyQt5 with this single command:
```shell
brew install pyqt5
```
Let’s take a sample PyQt5 code as an example and run it. For examples, I usually pick one up from the excellent PyQt tutorials on zetcode.com. Here’s one:
```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-

"""
ZetCode PyQt5 tutorial

In this example, we create a simple window in PyQt5.

author: Jan Bodnar
website: zetcode.com
last edited: January 2015
"""

import sys
from PyQt5.QtWidgets import QApplication, QWidget

if __name__ == '__main__':
    app = QApplication(sys.argv)

    w = QWidget()
    w.resize(250, 150)
    w.move(300, 300)
    w.setWindowTitle('Simple')
    w.show()

    sys.exit(app.exec_())
```
Run it using:
```shell
python3 filename.py
```
If you get a nice looking small window – it worked!
I am a big fan of pyenv and use it for running different versions and flavours of Python. If you use pyenv too, chances are you have your own version of Python installed through it. However, the brew formula that installs PyQt5 depends on another formula – python3 – Homebrew’s own Python 3 installation. When we install PyQt5, this formula is used to build the bindings, so the bindings are only available to that particular Python 3 installation and unavailable to our pyenv versions.
We will discuss two potential solutions to this issue.
One simple workaround is to use the Python 3 version installed by Homebrew. We can ask pyenv to switch to the system version whenever we’re doing PyQt5 development.
```shell
pyenv global system
python3 filename.py
```
We can create aliases to quickly switch between Python versions. I have this in my .zshrc:
```shell
alias py2="pyenv global 2.7.8"
alias py3="pyenv global 3.5.0"
alias pysys="pyenv global system"
```
This way is very quick and simple but we miss the benefits of using pyenv.
Alternatively, we can add the site-packages of the Homebrew-installed Python 3 to our pyenv installation of Python 3. Since both installations were built on the same machine and OS, the bindings should work correctly. We will use .pth files to do this.
Let’s first find out the site-packages for the homebrew installation:
```shell
brew info python3
```
We would notice a message like:
```
They will install into the site-package directory
/usr/local/lib/python3.5/site-packages
```
That is the site-packages directory for this version.
Now let’s find our pyenv python3’s local site directory:
```shell
# Switch to pyenv python first
python -m site --user-site
```
Now create a homebrew.pth file in that directory and put the previously found site-packages path there.
Let’s create the file:
```shell
# In my local site directory for pyenv
vim /Users/masnun/.local/lib/python3.5/site-packages/homebrew.pth
```
And put these contents:
```
/usr/local/lib/python3.5/site-packages
```
Save and exit. Now you should be able to just use:
```shell
python filename.py
```
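If you’re curious how .pth files actually work, here’s a small self-contained sketch using the standard library site module. The directories are temporary stand-ins (created just for the demo) for the Homebrew site-packages and the pyenv site directory:

```python
import os
import site
import sys
import tempfile

# "extra" stands in for /usr/local/lib/python3.5/site-packages,
# "sitedir" stands in for our pyenv site directory.
extra = tempfile.mkdtemp()
sitedir = tempfile.mkdtemp()

# Write a .pth file inside the site directory pointing at the extra path
with open(os.path.join(sitedir, "homebrew.pth"), "w") as f:
    f.write(extra + "\n")

# This is roughly what Python does for real site directories at startup:
# it reads *.pth files and appends each existing path to sys.path
site.addsitedir(sitedir)

print(extra in sys.path)  # True
```

So once our homebrew.pth is in place, the pyenv interpreter picks up Homebrew’s site-packages (and the PyQt5 bindings in it) automatically at startup.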
I assume you are already familiar with Docker and its use cases. If you haven’t started using Docker yet, I strongly recommend you do soon.

I have a Django application that I want to dockerize for local development. I am also new to Docker, so everything I do in this post might not be suitable for your production environment. Please do check Docker best practices for production apps; this tutorial is meant to be a basic introduction to Docker. In this post, I am going to use Docker Machine and Docker Compose. You can get them by installing the awesome Docker Toolbox.
Before we start, we need to break down our requirements so we can individually build the required components. For my particular application, we need these: a Django app server, a MySQL database and a Redis instance.
We will build images for these separately so we can create individual containers and link them together to compose our ultimate application. We shall build our Django App server and use pre-built images for MySQL and Redis.
Before we begin, let’s talk about Dockerfiles. A Dockerfile is a script to customize our Docker builds. It allows us control and flexibility over how we build the images for our applications. We will use a custom Dockerfile to build the Django app server.
To build an image for a Django application we need to go through the following steps: choose a base image, install the system dependencies, install the Python packages from requirements.txt, and set the command that runs the app.
Here’s the Dockerfile we shall use:
```dockerfile
FROM phusion/baseimage
MAINTAINER masnun

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update
RUN apt-get install -y python python-pip python-dev
RUN apt-get install -y libxml2-dev libxslt-dev libffi-dev libssl-dev
RUN apt-get install -y libmysqlclient-dev

ADD requirements.txt /app/src/requirements.txt
WORKDIR /app/src
RUN pip install -r requirements.txt

WORKDIR /app/src/lisp

CMD [ "python", "manage.py", "runall"]

EXPOSE 8000
```
So what are we doing here?

- We use phusion/baseimage as our base image. It’s a barebones image based on Ubuntu. Ubuntu by default comes with many packages which we don’t need to run inside Docker. This base image gets rid of those and provides a very lean and clean image to start with.
- We set DEBIAN_FRONTEND to be non-interactive. This will suppress any interactive prompts during the build process. Since the Docker build process is automated, we really don’t have any way to interact during it, so we disable interaction. And as you might have guessed already, ENV sets an environment variable.
- We add our requirements.txt file to /app/src/requirements.txt, change the work directory and install the packages using pip. ADD is used to copy any files or directories to the container while it builds. You might wonder why we didn’t copy over our entire project – that’s because we want to use Docker for our development. We will use a nice feature of Docker which allows us to mount our local directories directly inside the container. Doing this, we don’t need to copy files every time they change. More on this later.
- We change the work directory to /app/src/lisp and run the runall management command. This command runs the Django default server along with some other services my application needs. Usually we would just use runserver.
- We EXPOSE port 8000.

If you go through the Dockerfile references you will notice – we can do a lot more with Dockerfiles.
As we mentioned earlier, we shall use pre-built images for MySQL and Redis. We could build them ourselves too but why not take advantage of the well maintained images from the generous folks in the docker community?
We can link multiple Docker containers to compose a final application. We could do that using the docker command manually, but Docker Compose is a very nice tool which allows us to define the services we need in a very easy to read syntax. With Docker Compose, we don’t need to run them manually; we can use simple commands to do complex Docker magic! Here’s our docker-compose.yml file:
```yaml
web:
    build: .
    restart: always
    volumes:
        - .:/app/src
    ports:
        - "8000:8000"
    links:
        - redis
        - mysql

redis:
    image: redis:latest
    volumes:
        - /var/lib/redis
    ports:
        - "6379"

mysql:
    image: mysql:latest
    volumes:
        - /var/lib/mysql
    ports:
        - "3306:3306"
    environment:
        MYSQL_ROOT_PASSWORD: su53RsEc53T!
```
In our docker-compose file, we define 3 components:

- The web service builds our app from the Dockerfile in the current directory (the build key). We ask it to restart always and define volumes to mount. .:/app/src means – mount the current directory on my OS X machine as /app/src/ on the container. We also define which ports to expose and which containers should be linked with it.
- The redis and mysql services use pre-built images (the image key).
- Please make sure the volume paths exist and are accessible.

You can consult the Compose File Reference for more details.
To run the application, we can do:
```shell
docker-compose up
```
Please note, the Django server might throw errors if the MySQL / Redis server takes time to initialize. So I usually run them separately:
```shell
docker-compose start mysql
docker-compose start redis

# After some time

docker-compose start web
```
Our MySQL server is running on the IP of the Docker Machine. You need to use this IP address in your Django settings file. To get the IP of a docker machine, type in:
```shell
# Here `default` is the machine name
docker-machine ip default
```
We can pass a MYSQL_DATABASE environment value to the mysql image so the database is created when creating the service. Or we can connect to the Docker machine manually and create our databases.
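For reference, the relevant part of the Django settings might look like the sketch below. The IP 192.168.99.100 and the database name are assumptions – use whatever `docker-machine ip` printed for your machine and whatever database you created; the password matches the compose file above:

```python
# Django database settings for the dockerized MySQL service (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myapp',                 # hypothetical database name
        'USER': 'root',
        'PASSWORD': 'su53RsEc53T!',      # MYSQL_ROOT_PASSWORD from docker-compose.yml
        'HOST': '192.168.99.100',        # output of `docker-machine ip default`
        'PORT': '3306',
    }
}
```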
Twitter allows us to download our tweets from the account settings page. Once we request our archive, Twitter will take some time to prepare it and send us an email when it’s ready. We will get a download link in the email. After unpacking the archive, we shall find a CSV file that contains our tweets – tweets.csv. The archive also contains an HTML page (index.html) that displays our tweets on a nice UI. While this is nice to look at, our primary objective is to extract the links from our tweets.
If we look at the CSV file closely, we shall find a field named expanded_urls which generally contains the URLs we use in our tweets. We will work with the values in this field. Along with each URL, we also want to fetch the page title. For this we will use Python 3 (I am using 3.5) and we need the requests and beautifulsoup4 packages to download and parse the pages. Let’s install them:
```shell
pip install requests
pip install beautifulsoup4
```
We will follow these steps to extract links and their page titles from the tweets:

- Read the CSV file and get the value of the expanded_urls field for each tweet
- Split the field on commas, since it can contain multiple URLs
- Skip domains we don’t care about (for example, twitter.com itself)
- Download each page using the requests library. If the page doesn’t return a HTTP 200, we ignore the response
- Parse the page and print its title

Now let’s convert these steps to code. Here’s the final script I came up with:
```python
import csv

import requests
from bs4 import BeautifulSoup

DOMAINS_TO_SKIP = ['twitter.com']

with open('tweets.csv', 'r') as csvfile:
    reader = csv.DictReader(csvfile)

    # each row is a tweet
    for row in reader:
        url_string = row.get('expanded_urls')
        urls = url_string.split(",")
        for url in urls:
            # Skip the domains
            skip = False
            for domain in DOMAINS_TO_SKIP:
                if domain in url:
                    skip = True
                    break

            # fetch the title
            if url and not skip:
                print("Crawling: {}".format(url))
                resp = requests.get(url)
                if resp.status_code == 200:
                    soup = BeautifulSoup(resp.content, "html.parser")
                    if soup.title:
                        print("Title: {}".format(soup.title.string))
```
I am actually using this for a personal project – https://github.com/masnun/bookmarks – it’s basically a bare-bones Django admin app where I intend to store the links I visit/share. I come across a lot of interesting projects, articles and videos and then later lose track of them. Hopefully this app will remedy that. This piece of code is part of the Twitter import functionality of the mentioned app.
We know we can do a lot of async stuff with asyncio, but have you ever wondered how to execute blocking code with it? It’s pretty simple actually – asyncio allows us to run blocking code using the BaseEventLoop.run_in_executor method. It will run our functions in an executor (a pool of threads or processes) and provide us with Future objects which we can await or yield from.
Let’s see an example with the popular requests library:
```python
import asyncio

import requests

loop = asyncio.get_event_loop()


def get_html(url):
    return loop.run_in_executor(None, requests.get, url)


@asyncio.coroutine
def main():
    resp1 = yield from get_html("http://masnun.com")
    resp2 = yield from get_html("http://python.org")

    print(resp2, resp1)


loop.run_until_complete(main())
```
If you run the code snippet, you can see how the two responses are fetched asynchronously 🙂
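Passing None as the first argument to run_in_executor uses the loop’s default thread pool; we can also supply our own executor. Here’s a small self-contained sketch (using a time.sleep-based function instead of requests, so it runs without network access) with an explicit ThreadPoolExecutor and the newer async/await syntax:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

loop = asyncio.new_event_loop()
executor = ThreadPoolExecutor(max_workers=4)


def blocking_task(n):
    # Stands in for a blocking network call
    time.sleep(0.2)
    return n * 2


async def main():
    # Both blocking calls run in the thread pool concurrently
    return await asyncio.gather(
        loop.run_in_executor(executor, blocking_task, 1),
        loop.run_in_executor(executor, blocking_task, 2),
    )


print(loop.run_until_complete(main()))  # [2, 4]
```

Because both tasks sleep concurrently in the pool, the whole thing takes roughly 0.2 seconds rather than 0.4.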
We want to create a bot that will track specific topics and retweet them. We shall use the Twitter Streaming API to track topics. We will use the popular tweepy package to interact with Twitter.
Let’s first install Tweepy
```shell
pip install tweepy
```
We need to create a Twitter app and get the tokens. We can do that from : https://apps.twitter.com/.
Now let’s see the codes:
```python
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler, API
from tweepy import Stream

import json
import logging
import warnings

warnings.filterwarnings("ignore")

access_token = "4464294380-GHJ3PY...lEqzZ8FikULPnkxaS4huT"
access_token_secret = "L033NEsDOA...tZVoaU8prpFbiREWhcVROw2"
consumer_key = "D1G0bn...9Y88p"
consumer_secret = "rGJjCU...FRISsiMURYlqCXJvOP"

auth_handler = OAuthHandler(consumer_key, consumer_secret)
auth_handler.set_access_token(access_token, access_token_secret)

twitter_client = API(auth_handler)

logging.getLogger("main").setLevel(logging.INFO)

AVOID = ["monty", "leather", "skin", "bag", "blood", "bite"]


class PyStreamListener(StreamListener):
    def on_data(self, data):
        tweet = json.loads(data)
        try:
            publish = True
            for word in AVOID:
                if word in tweet['text'].lower():
                    logging.info("SKIPPED FOR {}".format(word))
                    publish = False

            if tweet.get('lang') and tweet.get('lang') != 'en':
                publish = False

            if publish:
                twitter_client.retweet(tweet['id'])
                logging.debug("RT: {}".format(tweet['text']))
        except Exception as ex:
            logging.error(ex)

        return True

    def on_error(self, status):
        print(status)


if __name__ == '__main__':
    listener = PyStreamListener()
    stream = Stream(auth_handler, listener)
    stream.filter(track=['python', 'django', 'kivy', 'scrapy'])
```
The code is pretty much self explanatory:

- We subclass StreamListener to implement our own on_data method
- We create a Stream by passing the auth handler and the listener
- We call the stream’s filter method with the track argument to follow a number of topics we are interested in
- In the on_data method we parse the tweet, check for some common words to avoid, check the language and then retweet it