In this article, I will show you how to do digital marketing with Twitter and Python for our Django web application. We are building a Django web application that posts jobs from different companies online. Users can search for jobs directly on our platform and apply for the available ones. At this stage, when they apply, we simply send them to the specific company's site to complete the application.
Now that we have built our application and are collecting job data regularly, we want to post these jobs on our social media channels so clients can discover the jobs we post on our website, click through to our site, and help us grow our brand.
Twitter Post Automation – Digital Marketing with Twitter and Python

In this article we will focus on creating a Twitter bot to schedule posts from our jobs dataset. In a previous article, we discussed how to create a Twitter bot with Python and Tweepy. We will build on that tutorial and apply the Tweepy API to schedule posts using our web application's job data.
Collecting and Cleaning the Data
Before we can create or schedule tweets with job data for our web application, we need to collect the data that will go into the tweets.
You can do this by (1) grabbing it from your own database or (2) scraping it from the source. There are pros and cons to each approach. Our web application's database is built on Postgres, so we could technically use our database credentials outside of the application and grab the data as we please. That would require writing SQL queries to access the database.
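As a rough sketch of option (1), the query side would look something like this. Here sqlite3 stands in for Postgres so the example runs anywhere; in production you would connect with psycopg2 using your Django database credentials. The table and column names below are assumptions, not our actual schema:

```python
import sqlite3

# sqlite3 stands in for Postgres so this sketch is self-contained;
# with psycopg2 the connection line changes but the query code is
# nearly identical. Table/column names are assumed for illustration.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE jobs_job (title TEXT, contract_type TEXT, url TEXT)")
conn.execute(
    "INSERT INTO jobs_job VALUES ('Data Engineer', 'Permanent', 'https://example.com/jobs/1')"
)

# Pull the fields we would need for a tweet
rows = conn.execute("SELECT title, contract_type, url FROM jobs_job").fetchall()
for title, contract_type, url in rows:
    print(title, contract_type, url)
```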
The second alternative is the one we will use: we already have the code for scraping the jobs websites from this previous tutorial: Web and PDF Scraping for a Django Web Application. All we need is to use that same data for tweets, instead of adding it to the database.
Rebuilding the Web Scraping Code
We are going to scrape the websites and save the data in a JSON file on our server; later we can read the JSON file back to grab the data for our tweets.
We will again be using BeautifulSoup and Requests.
This code will scrape the specific website to get a list of posted jobs and save that list in a JSON file. We have added a service to shorten the URLs, using Rebrandly. You can create a free account on Rebrandly: https://www.rebrandly.com/ and grab your API key to use in this code.
You will need to shorten your URL in order to tweet it: a raw job URL takes close to 100 characters, and if you do not shorten it, it will take up more than half of your available characters.
from bs4 import BeautifulSoup
import requests
import json
import config

headers = {'User-agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0'}
baseUrl = 'https://careers.vodafone.com'
url = "https://careers.vodafone.com/key/vodacom-vacancies.html?q=vodacom+vacancies"

# Read a JSON file
def readJson(filename):
    with open(filename, 'r') as fp:
        return json.load(fp)

# Write a JSON file
def writeJson(filepath, data):
    with open(filepath, 'w') as fp:
        json.dump(data, fp)

# Shorten a job URL with the Rebrandly API
def shortenUrl(linkUrl):
    linkRequest = {
        "destination": linkUrl,
        "domain": {"fullName": "messages.careers-portal.co.za"},
        #"slashtag": "",
    }
    requestHeaders = {
        "Content-type": "application/json",
        "apikey": 'enter-API-key-from-rebrandly',
    }
    r = requests.post("https://api.rebrandly.com/v1/links",
                      data=json.dumps(linkRequest),
                      headers=requestHeaders)
    if r.status_code == requests.codes.ok:
        link = r.json()
        return link["shortUrl"]
    else:
        return ''
# Scan a single job page and build a job dict
def jobScan(link):
    the_job = {}
    jobUrl = '{}{}'.format(baseUrl, link['href'])
    the_job['urlLink'] = shortenUrl(jobUrl)
    job = requests.get(jobUrl, headers=headers)
    jobSoup = BeautifulSoup(job.content, "html.parser")
    the_divs = jobSoup.find_all("div", {"class": "joblayouttoken displayDTM"})
    if not the_divs:
        return None
    title = jobSoup.find_all("h1")[0].text
    the_job['title'] = title
    country = the_divs[1].find_all("span")[1].text
    the_job['location'] = country
    date_posted = the_divs[2].find_all("span")[1].text
    the_job['date_posted'] = date_posted
    full_part_time = the_divs[3].find_all("span")[1].text
    the_job['type'] = full_part_time
    contract_type = the_divs[4].find_all("span")[1].text
    the_job['contract_type'] = contract_type
    return the_job
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, "html.parser")

# Collect the unique job links from the listing page
table = []
results = soup.find_all("a", {"class": "jobTitle-link"})
for x in results:
    if x not in table:
        table.append(x)

# Run the scanner and collect all the jobs
final_jobs = []
for x in table:
    job = jobScan(x)
    if job:
        final_jobs.append(job)

# Save the data in a JSON file
filepath = 'absolute-path-to-where-you-want-to-save-the-file/vodacom.json'
writeJson(filepath, final_jobs)
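Once the script has run, vodacom.json holds a list of dicts with the keys that jobScan collects. The values below are illustrative, not real scraped data:

```python
import json

# One job entry in the shape jobScan produces (values made up for illustration).
sample = [{
    'urlLink': 'https://messages.careers-portal.co.za/abc123',
    'title': 'Data Engineer',
    'location': 'South Africa',
    'date_posted': '01-Jan-2021',
    'type': 'Full Time',
    'contract_type': 'Permanent',
}]
print(json.dumps(sample, indent=2))
```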
Building the Tweet Message String
We then read the file and build a status string for each job:
import json

def readJson(filename):
    with open(filename, 'r') as fp:
        return json.load(fp)

def grabVodacomJobs():
    statuses = []
    filepath = '/absolute-path-to-where-you-want-to-save-the-file/vodacom.json'
    jobs = readJson(filepath)
    for job in jobs:
        status = 'Vodacom - {} {} Apply {}'.format(
            job['title'], job['contract_type'], job['urlLink']
        ).replace('\n', '').replace('  ', ' ')
        statuses.append(status)
    return statuses
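It is worth sanity-checking a status against Twitter's 280-character limit before sending it, which is also why we shortened the URLs earlier. A quick check, with illustrative job values:

```python
# Illustrative job values (not real scraped data).
job = {'title': 'Data Engineer', 'contract_type': 'Permanent',
       'urlLink': 'https://messages.careers-portal.co.za/abc123'}

status = 'Vodacom - {} {} Apply {}'.format(
    job['title'], job['contract_type'], job['urlLink'])
print(len(status), status)
assert len(status) <= 280  # Twitter's hard limit per tweet
```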
Finally, we build our tweet and send it
Remember to get your API credentials from Twitter and save them in a file called config.py in the same directory as the file we are sending tweets from.
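A minimal config.py would match the attribute names used below; the values are placeholders, so substitute your own keys from the Twitter developer portal:

```python
# config.py -- keep this file out of version control
api_key = 'your-api-key'
api_secret = 'your-api-secret'
access_token = 'your-access-token'
access_token_secret = 'your-access-token-secret'
```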
import tweepy
import config
import random

vodaJobs = grabVodacomJobs()
hashtags = ["#hashtag1", "#hashtag2", "#hashtag3", "#hashtag4", "#hashtag5"]

def createTheAPI():
    auth = tweepy.OAuthHandler(config.api_key, config.api_secret)
    auth.set_access_token(config.access_token, config.access_token_secret)
    api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
    return api

def createVodacomTweet():
    # Keep picking a random job until the message fits comfortably
    while True:
        msg = random.choice(vodaJobs)
        tags = random.sample(hashtags, 2)
        if len(msg) < 110:
            msg += ' ' + tags[0]
            msg += ' ' + tags[1]
        if len(msg) < 160:
            return msg

def sendVodacomTweet():
    api = createTheAPI()
    tweet = createVodacomTweet()
    api.update_status(tweet)
    print('Just tweeted: {}'.format(tweet))
With these functions defined, whenever you want to send a tweet, all you need to do is call sendVodacomTweet().
Introduce Tweet Scheduling – for Digital Marketing with Twitter and Python
Being able to write code to automate a tweet is not very useful if you cannot schedule the tweets in advance. So far we can grab data from the internet, save it on our machine, clean it up, and send it as a tweet.
What we need is to do this automatically, on a schedule. For that we are going to use the Python schedule library. Start by installing it: pip install schedule.
Add this code to introduce scheduling:
# At the top of the file
import time
import schedule

# At the bottom of the file, after all the functions
schedule.every().day.at("13:31").do(sendVodacomTweet)

while True:
    schedule.run_pending()
    time.sleep(1)
You can also add functions to the schedule that refresh the data by re-scraping the website and re-creating the JSON files, so you are tweeting fresh data every day. You are now ready to do digital marketing with Twitter and Python, for free. This code is written once and will keep working, with no monthly fees or payments.
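A sketch of that refresh idea, assuming a scrapeVodacomJobs() wrapper around the scraping code above (stubbed out here so the example is self-contained):

```python
import json

def scrapeVodacomJobs():
    # Stub standing in for the scraping code above; in the real script this
    # would re-scrape the careers site and return fresh job dicts.
    return [{'title': 'Data Engineer', 'contract_type': 'Permanent',
             'urlLink': 'https://messages.careers-portal.co.za/abc123'}]

def refreshVodacomData(filepath):
    # Overwrite the JSON file so the next tweets use fresh data.
    with open(filepath, 'w') as fp:
        json.dump(scrapeVodacomJobs(), fp)

# In the real script, register the refresh ahead of the daily tweet, e.g.:
# schedule.every().day.at("06:00").do(lambda: refreshVodacomData(filepath))
# schedule.every().day.at("13:31").do(sendVodacomTweet)
```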
Next Steps
You will want to deploy your code. If you have been following our tutorials, you should have a VPS and can simply host it there. If you do not have a virtual server, you can get one here for as little as $5 >
https://m.do.co/c/7d9a2c75356d
The $5 server will be more than enough for hosting a Twitter script.
