How to Do Trello Integrations with Zapier

Data is everywhere. Instapaper, to-do tools, Trello lists, Google Sheets – we use different online tools for different purposes, and sometimes we need to move information from one system to another. For example, when we update a Trello list, we might want the information to flow to Google Sheets for some data analysis.

This can be achieved with Zapier, an online automation tool that connects your favorite apps, such as Gmail, Slack, MailChimp, and over 1,000 more. You do not need to do any programming to connect two apps and automate repetitive tasks.

Below is how to connect Trello with Google Sheets.

Preparing Google Sheets
In Google Sheets we need to create the first row with the header names for the columns where you want the data to flow.

Here I created a row with the columns Board, List, Card, Name, Description, Comment, Label, Due Date.

Configuring Trello Settings
Now we go to the Zapier site and start creating a Zap by selecting Make a Zap.
First we choose application 1, which will be the Trello application.
Then we select a trigger, for example New Activity.

Now we select the Trello account, confirm the connection and select the activity (required). When selecting the activity we can select a board, list or card. Zapier will then ask for a sample of the activity – go to Trello and create a new card with some text.
Back in the Zap editor, pick the sample and it will be set for you.

Configuring Google Sheets Settings
Now we select the second application – Google Sheets.
We need to specify an action, for example Create Spreadsheet Row. This means that the trigger on Trello will cause a row to be created in Google Sheets.

Now it will ask us to connect to Google Sheets.
Then we can specify the mappings between Trello and Google Sheets fields.

After successfully testing our Trello integration with Google Sheets, we can click the Finish button and turn on our Zap.

Every 15 minutes Zapier will check for new activity in Trello and send the data to Google Sheets per our Zap.


Getting Reddit Data with Python

In the previous post How to Get Submission and Comments with Python Reddit API Wrapper – PRAW I showed how to use the Python Reddit API Wrapper to get information from Reddit. In this post we review a few more ways to get data from Reddit.

I searched the web and found the following Python script on GitHub. It uses the BeautifulSoup Python library for parsing HTML and the urllib.request Python library for opening the Reddit URL.

As per the documentation, the urllib.request module defines functions and classes which help in opening URLs (mostly HTTP) in a complex world – basic and digest authentication, redirections, cookies and more.

Another available example is at Scraping Reddit with Python and BeautifulSoup 4. It uses BeautifulSoup for HTML parsing and the requests Python library for opening the URL. The requests module uses urllib3 under the hood and provides a slightly higher-level and simpler API on top of it. Multiple discussions on the web recommend using the requests library.
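The linked examples use BeautifulSoup; as a dependency-free sketch of the same parsing idea, the standard library's html.parser can pull links out of downloaded HTML. The HTML fragment below is made up for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

# a small made-up HTML fragment standing in for a downloaded page
html = ('<html><body><a href="https://reddit.com/r/python">r/python</a>'
        '<a href="https://reddit.com/r/learnpython">r/learnpython</a></body></html>')

parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['https://reddit.com/r/python', 'https://reddit.com/r/learnpython']
```

BeautifulSoup does the same job with far less code, but this shows what the parsing step actually involves.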

There are many ways to get information from the web. Instead of scraping web pages, we can use a website's RSS feed (assuming it is available). For this we need feedparser – a Python library for parsing Atom and RSS feeds.
To install feedparser run the command: pip install feedparser

Here is an example of how to use feedparser to get data from a Reddit RSS link:

import feedparser

# put the link of the Reddit RSS feed inside parse()
d = feedparser.parse('')

# print all posts
count = 1
blockcount = 1
for post in d.entries:
    if count % 5 == 1:
        print ("-----------------------------------------\n")
        blockcount += 1
    print (post.title + "\n")
    count += 1

Thus we reviewed several more ways to get information from Reddit. We can use these methods for other websites too – we just need to replace the URL. Below you can find a few more links on how to scrape data from web pages.
Feel free to leave comments or feedback.

1. RedditNewsAggregator
2. Scraping Reddit with Python and BeautifulSoup 4
3. A simple Python feedparser script
4. Requests Documentation
5. What is the practical difference between these two ways of making web connections in Python?
6. Beginners guide to Web Scraping: Part 2 – Build a web scraper for Reddit using Python and BeautifulSoup
7. Scraping Reddit data

How to Get Submission and Comments with Python Reddit API Wrapper – PRAW.

According to Alexa [1], people spend more time on Reddit than on Facebook, Instagram or YouTube. Users use Reddit to post questions, share content or ideas, and discuss topics. So it is very interesting to automatically extract text data from this web service. We will look at how to do this with PRAW – The Python Reddit API Wrapper. [2]

An example of how to get an API key and use the Python PRAW API can be found at How to scrape reddit with python. It does not, however, add the comments that might be attached to a submission. Comments can contain important information, so I decided to build a Python script with the PRAW API, modified from the above link, to add comments and a few minor things.

To get comments we first need to obtain a submission object.
With a submission object we can then do the following:

comment_body = ""
for comment in submission.comments.list():
    comment_body = comment_body + comment.body + "\n"

If we wanted to output only the body of the top-level comments in the thread we could do:

for top_level_comment in submission.comments:
    print(top_level_comment.body)

Here is the full Python script of the API example that can get Reddit information including comments. Note that since we are only downloading data and not changing anything, we do not need a username and password. But in case you modify data on Reddit, you would need to include login information too.

To install module:
pip install praw

import praw
import pandas as pd
from datetime import datetime

reddit = praw.Reddit(client_id='xxxxxxxx',
                     client_secret='xxxxxxxx',
                     user_agent='personal use script')
                     # add username / password here only if modifying data:
                     # username='YOUR_REDDIT_USER_NAME'

def get_yyyy_mm_dd_from_utc(dt):
    date = datetime.utcfromtimestamp(dt)
    return str(date.year) + "-" + str(date.month) + "-" + str(date.day)

subreddit = reddit.subreddit('learnmachinelearning')

top_subreddit = subreddit.top(limit=10)   # adjust the limit as needed

topics_dict = { "title":[], "score":[], "id":[], "url":[], \
                "comms_num": [], "created": [],  "body":[], "z_comments":[]}

for submission in top_subreddit:
    topics_dict["title"].append(submission.title)
    topics_dict["score"].append(submission.score)
    topics_dict["id"].append(submission.id)
    topics_dict["url"].append(submission.url)
    topics_dict["comms_num"].append(submission.num_comments)
    topics_dict["created"].append(get_yyyy_mm_dd_from_utc(submission.created_utc))
    topics_dict["body"].append(submission.selftext)
    comment_body = ""
    for comment in submission.comments.list():
        comment_body = comment_body + comment.body + "\n"
    topics_dict["z_comments"].append(comment_body)

topics_data = pd.DataFrame(topics_dict)

topics_data.to_csv('Reddit_data.csv', index=False) 

1. The top 500 sites on the web
2. PRAW – The Python Reddit API Wrapper
3. How to scrape reddit with python
4. Tutorials
5. Webscraping Reddit — Python Reddit API Wrapper (PRAW) Tutorial for Windows

Wallabag – Productivity App for Read It Later Saved Articles

With so much information on the web, many of you probably use read-it-later web applications such as Pocket, Instapaper or Wallabag. I recently discovered and started to use the Wallabag application. In this post I will share some thoughts about how Wallabag can help you stay more productive.

Wallabag is a much better option than using default web browser bookmarks or files and folders with links or saved pages. While Google allows you to retrieve any link later, it takes time to find a link that you saw before. You also do not want to research the same topic for links every time. It is more effective to search once, save the links and use the saved links later.

Below are a few ideas on how to use Wallabag in order to get the most out of it and be more productive.

1. Use tags to label the most interesting or most useful pages (links). Tags may be related to the content. They can also include the next action that you want to take with the content on the page. This way you can easily locate the links you need later.

2. Once you read a page, mark it as read and this will put it in the archive. If you just reviewed a page quickly and are not going to read it later, still put it in the archive. That way you will have a manageable number of unread links that you can act on.

3. Do not collect many links without any action. Try to act on links and archive unneeded or processed links as soon as possible.

4. If you want to add notes not connected with any link, just use outside storage like Google Drive, which gives you a link to enter into Wallabag.

5. Use as much Wallabag functionality as possible. There are features like tagging rules, export, import and RSS that can be very useful.

6. With the Wallabag API you can do even more in case you like web development or hacking. For example, it would be nice to extract notes or a summary of the content that you read each month or quarter. Here is an API example that I wrote as a starting point:
Python API Example with Wallabag Web Application for Extracting Entries and Quotes

I use my own self-hosted instance here. You are welcome to try it and / or play with the API.

If you have ideas, comments or suggestions, I would love to hear them.

Wallabag on github
Wallabag: An open source alternative to Pocket

Python API Example with Wallabag Web Application for Extracting Entries and Quotes

python and wallabag

In the previous post Python API Example with Wallabag Web Application we explored how to connect to Wallabag via the Web API and add an entry to the Wallabag web application. For this we set up the API, obtained a token via a Python script and then created an entry (added a link).

In this post we will extract entries through the Web API with a Python script. From an entry we will extract needed information such as the id of the entry. Then for this id we will look at how to extract annotations and quotes.

Wallabag is a read-it-later web application like Pocket or Instapaper. Quotes are texts that we highlight within Wallabag. Annotations are notes that we can save together with quotes. One entry can have several quotes / annotations. Wallabag is open source software, so you can download it and install it locally or remotely on a web server.

If you did not set up the API, you need to set it up first to run the code below. See the previous post on how to do this.
The beginning of the script should also be the same as before – we first need to provide our credentials and obtain a token.

Obtaining Entries

After obtaining the token we move on to actually downloading the data. We can obtain entries using the code below:

p = {'archive': 0 , 'starred': 0, 'access_token': access}
r = requests.get('{}/api/entries.txt'.format(HOST), p)

p holds parameters that allow us to limit the output.
The returned data is a JSON structure with a lot of information, including entries. It does not include all entries: it divides the entries into sets of 30 per page and provides a link to the next page. So we can extract the next page link and then extract entries again.

Each entry has a link, an id and some other information.
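The paging logic can be sketched as a small helper that keeps following the next-page link until there is none. Here fetch_page is a hypothetical stand-in for the requests.get call against the API, and the dictionaries mimic the _links / _embedded structure described above (assuming a HAL-style href field on the next link):

```python
def collect_entries(fetch_page, first_url):
    """Follow the '_links' -> 'next' chain and gather all items."""
    entries = []
    url = first_url
    while url:
        data = fetch_page(url)
        entries.extend(data['_embedded']['items'])
        next_link = data['_links'].get('next')
        url = next_link['href'] if next_link else None
    return entries

# fake two-page response standing in for the Wallabag API
pages = {
    'page1': {'_embedded': {'items': [{'id': 1}, {'id': 2}]},
              '_links': {'next': {'href': 'page2'}}},
    'page2': {'_embedded': {'items': [{'id': 3}]},
              '_links': {}},
}
print(collect_entries(pages.get, 'page1'))  # [{'id': 1}, {'id': 2}, {'id': 3}]
```

With the real API, fetch_page would be a function that calls requests.get with the access token and returns r.json().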

Obtaining Annotations / Quotes

To extract annotations and quotes we can use this code:

p = {'access_token': access}
link = '{}/api/annotations/' + str(data['_embedded']['items'][3]['id']) + '.txt'
print (link)
r = requests.get(link.format(HOST), p)
data = r.json()

Full Python Source Code

Below is full script example:

# Extract entries using wallabag API and Python
# Extract quotes and annotations for specific entry
# Save information to files
import requests
import json

# only these 5 variables have to be set
#HOST = ''
USERNAME = 'xxxxxx'
PASSWORD = 'xxxxxx'
CLIENTID = 'xxxxxxxxxxxx'
SECRET = 'xxxxxxxxxxx'
HOST = ''    

gettoken = {'username': USERNAME, 'password': PASSWORD, 'client_id': CLIENTID, 'client_secret': SECRET, 'grant_type': 'password'}
print (gettoken)

r = requests.post('{}/oauth/v2/token'.format(HOST), gettoken)
print (r.content)

access = r.json().get('access_token')

p = {'archive': 0, 'starred': 0, 'access_token': access}
r = requests.get('{}/api/entries.txt'.format(HOST), p)
data = r.json()

print (type(data))

with open('data1.json', 'w') as f:  # writing JSON object
    json.dump(data, f)

for key, value in data.items():
     print (key, value)
#Below how to access needed information at page level like next link
#and at entry level like id, url for specific 3rd entry (counting from 0)      
print (data['_links']['next']) 
print (data['pages'])
print (data['page']) 
print (data['_embedded']['items'][3]['id'])  
print (data['_embedded']['items'][3]['url'])  
print (data['_embedded']['items'][3]['annotations'])

p = {'access_token': access}

link = '{}/api/annotations/' + str(data['_embedded']['items'][3]['id']) + '.txt'
print (link)
r = requests.get(link.format(HOST), p)
data = r.json()
with open('data2.json', 'w') as f:  # writing JSON object
    json.dump(data, f)

#Below how to access first and second annotation / quote
#assuming they exist 
print (data['rows'][0]['quote']) 
print (data['rows'][0]['text']) 
print (data['rows'][1]['quote'])    
print (data['rows'][1]['text'])


In this post we learned how to use the Wallabag API to download entries, annotations and quotes. To do this we first downloaded entries and their ids. Then we downloaded the annotations and quotes for a specific entry id. Additionally we saw some Python and JSON examples of how to get the needed information from the retrieved data.

Feel free to provide feedback or ask related questions.

Python API Example with Wallabag Web Application

python and wallabag


It often happens that we need to send data to some web application from outside of it, using a Python script and the application's API. I am going to show how you can do this for the Wallabag web application. Wallabag is a Read It Later type of application, where you can save website links and then read them later.

Thus we will look here at how to write an API script that can send information to the Wallabag web-based application.

To do this we need access to Wallabag. It is an open source project (MIT license), so you can download and install it as a self-hosted service.

Collecting Information

Once we have installed Wallabag or got access to it, we collect the information needed for authorization.
Go to the Wallabag application, then the API Client Management tab, and create a client.
Note the client id and client secret.
See the screenshot below for reference.

Python API Example Script

Now we can go to a Python IDE and write the script as below. Here HOST is the base URL of the folder where we installed the Wallabag application.

import requests

# below 5 variables have to be set

USERNAME = 'xxxxxxxx'
PASSWORD = 'xxxxxxxx'
CLIENTID = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxx'
SECRET = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
HOST = ''    

gettoken = {'username': USERNAME, 'password': PASSWORD, 'client_id': CLIENTID, 'client_secret': SECRET, 'grant_type': 'password'}
print (gettoken)
r = requests.post('{}/oauth/v2/token'.format(HOST), gettoken)
print (r.content)

access = r.json().get('access_token')

url = '' # URL of the article
# should the article be already read? 0 or 1 for archive
# should the article be added as favorited? 0 or 1 for starred

article = {'url': url, 'archive': 0 , 'starred': 0, 'access_token': access}
r = requests.post('{}/api/entries.json'.format(HOST), article)

The output of print (gettoken) looks like this:

{'username': 'xxxxxxxxx', 'password': 'xxxxxxxxxx', 'grant_type': 'password', 'client_id': 'xxxxxxxxxxxxxxx', 'client_secret': 'xxxxxxxxxxx'}


I found it useful to include print (r.content) in case something goes wrong. It can help to see what is returned by the server.
It also helped me to look at the log, which is located at /var/logs/prod.log under the Wallabag installation. In case something goes wrong, it might have some clue in the log.


We looked at a Python API example of how to integrate a Python script with the Wallabag web application and send data using the Wallabag API and the Python requests library.


wallabag api example

How to Create Custom Plugin in WordPress to Get Referrer URL

Seventy-four percent of online consumers get frustrated when a website promotes content that isn't applicable to their interests. [1] How can we know customer interests? When a new user arrives at a website, we can get the URL of the previous site that the user visited. Such a previously visited page is called the referrer. If we have the browsing history of users from the same or similar websites, we can use these patterns for new users.

Since my site uses the WordPress platform, I decided to build a WordPress plugin to be able to manipulate content or ads based on the web user's referrer URL. (In the code this term is spelled referer.) This is my first WordPress plugin and I found creating it much easier than I thought.

How to Use Plugin

One possible use of this WordPress custom plugin (in pseudocode):

if referrer url is google
     show some ad for google visitors 
if referrer url is website ABC
     show some different ad or content
if none of the above
     show some default ad
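To make the branching concrete, here is the same logic as a small Python function (the plugin itself is written in PHP; the ad names and the abc.com domain are made up for illustration):

```python
def choose_ad(referrer):
    """Pick content based on the referring site (hypothetical ad names)."""
    if referrer and 'google' in referrer:
        return 'ad for google visitors'
    if referrer and 'abc.com' in referrer:
        return 'ad for ABC visitors'
    return 'default ad'

print(choose_ad('https://www.google.com/search?q=test'))  # ad for google visitors
print(choose_ad(None))                                    # default ad
```

The PHP version below follows the same pattern: check the referrer, then fall back to a default.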

Below are the steps that I used to create a custom plugin in WordPress. This plugin just gets the user's referrer and displays this information. You would need to insert some actions based on the referrer URL, as per the pseudocode example above.

Steps How to Create Custom Plugin in WordPress to Get Referrer URL

1. Go to your web hosting account where WordPress is installed and create a folder get_referer_widget under the plugins folder. Put a blank file init.php in the newly created folder. Here is how the folder structure will look:

2. In the file init.php we need to define the name of our plugin and its class, and register our plugin. The full PHP source code for init.php is provided at the end of this post.

3. To get the referer we can use the WordPress function wp_get_referer:

if ( wp_get_referer() ) {
    echo wp_get_referer();
    // based on referer show some ads or content
} else {
    echo "default action";
    // do default action as referrer url is not available
}

4. Go to the Plugins section in the WordPress admin and activate the newly installed plugin.

get_referer_plugin widget activation

5. Go to the Widgets tab and add the widget Get Referer Widget to some available area.

Plugin screen view with installed get_referer_widget plugin
Get_referer_widget plugin on the widgets screen

6. Test the widget: the first time you access the page it will show the default option, but if you then click some link on this page you should see the URL of the previous page.

7. Now that testing is complete, you can put the actual actions in it.
8. If you want to stop using this custom plugin, you can deactivate it in the WordPress plugin manager.

Here is the source code for init.php

<?php
/*
Plugin Name: Get Referer Widget
Description: This plugin provides a simple widget that shows the referrer url of the user
*/
class Get_Referer_Widget extends WP_Widget {
    function __construct() {
        parent::__construct(false, 'Get Referer Widget');
    }

    function widget($args) {
        if ( wp_get_referer() ) {
            $msg = wp_get_referer();
        } else {
            $msg = "default";
        }
        echo $args['before_widget'];
        echo "<p>$msg</p>";
        echo $args['after_widget'];
    }
}

function register_get_referer_widget() {
    register_widget('Get_Referer_Widget');
}
add_action('widgets_init', 'register_get_referer_widget');

We have seen how to create a custom plugin in WordPress to get the referrer URL. The referring website URL can be used to show different ads or content. It can also be used for referral analytics.

1. website-personalization-examples-dynamic
2. wp_get_referer
3. How to Create a Custom WordPress Widget
4. WordPress for Dummies. Lisa Sabin-Wilson

Web API to Save to Pocket App and Instapaper App

As we surf the web we find a lot of information that we might use later. We use different applications (Pocket, Instapaper, Diigo, Evernote or other apps) to save the links or notes that we find.

While many of the above applications have a lot of great features, there are still many opportunities to automate some processes using the web APIs that many applications now provide.

This allows us to extend application functionality and eliminate some manual processes.

For example: you have about 20 links that you want to send to a Pocket-like application.
Another example: when you add a link to one application, you may also want to save the link or a note to Pocket or Instapaper.
Or maybe you want to automatically (through a script) extract links from some websites and save them to your Pocket app.
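The first example can be sketched as a simple loop over a list of links; save_to_pocket here is a hypothetical placeholder for a real API call like the Pocket request covered later in this post:

```python
def save_links(links, save_func):
    """Send each link to a read-it-later service via save_func; return the count saved."""
    saved = 0
    for link in links:
        if save_func(link):
            saved += 1
    return saved

# hypothetical stand-in for the real API call
def save_to_pocket(link):
    print('saving', link)
    return True

links = ['https://example.com/a', 'https://example.com/b']
print(save_links(links, save_to_pocket))  # 2
```

In a real script save_func would post the link to the service's API and return whether the request succeeded.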

In today's post we will look at a few examples that let you start to do this. We will check how to use the Pocket API and the Instapaper API with Python.

API for Pocket App

Pocket, previously known as Read It Later, is an application and service for managing a reading list of articles from the Internet. It is available on many different devices including web browsers. (Wikipedia)
There is a great post [1] showing how to set up the API for it. The post has detailed screenshots of how to get all the identification information needed for a successful login.

In summary, you need to get a consumer key online for your API application and then obtain a token code via a script. Then you can open the link that includes the token and authorize the application. After this you can use the API to send links.

Below is the summary Python script to send a link to Pocket, including the previous steps:

import requests

# redirect link - can be any link
redirect_link = ""
# obtain the consumer key online when registering your application
consumer_key = "xxxxxxxxxx"

# connect to the Pocket API to get the request token code
pocket_api = requests.post('https://getpocket.com/v3/oauth/request',
         data = {'consumer_key': consumer_key,
                 'redirect_uri': redirect_link})

print (pocket_api.status_code)   # if 200, it means all ok.
print (pocket_api.text)

# remove 'code='
token = pocket_api.text[5:]
print (token)
url = "https://getpocket.com/auth/authorize?request_token=" + token + "&redirect_uri=" + redirect_link

import webbrowser
webbrowser.open_new(url)  # opens in default browser
# click on Authorize button in webbrowser

# once authorization is done, exchange the request token for an access token
pocket_auth = requests.post('https://getpocket.com/v3/oauth/authorize',
         data = {'consumer_key': consumer_key, 'code': token})
access_token = pocket_auth.text.split('&')[0].replace('access_token=', '')

# now you can save links
pocket_add = requests.post('https://getpocket.com/v3/add',
         data = {'url': '',    # the link to save
                 'consumer_key': consumer_key,
                 'access_token': access_token})
print (pocket_add.status_code)


API for Instapaper

Instapaper is a bookmarking service owned by Pinterest. It allows web content to be saved so it can be "read later" on a different device, such as an e-reader, smartphone or tablet. (Wikipedia)
Below is a code example of how to send a link to Instapaper. It is based on a script that I found on the Internet. [2]

import urllib.request, urllib.parse, sys

def error(msg):
    print(msg)
    sys.exit(1)

def main():
    # Instapaper simple API endpoint for adding a URL
    api = 'https://www.instapaper.com/api/add'

    params = urllib.parse.urlencode({
        'username' : "actual_user_name",
        'password' : "actual_password",
        'url' : "https://www.actual_url",
        'title' : "actual_title",
        'selection' : "description"
    }).encode()

    r = urllib.request.urlopen(api, params)

    status = r.getcode()

    if status == 201:
        print('%s saved as %s' % (r.headers['Content-Location'], r.headers['X-Instapaper-Title']))
    elif status == 400:
        error('Status 400: Bad request or exceeded the rate limit. Probably missing a required parameter, such as url.')
    elif status == 403:
        error('Status 403: Invalid username or password.')
    elif status == 500:
        error('Status 500: The service encountered an error. Please try again later.')

if __name__ == '__main__':
    main()

1. Add Pocket API using Python – Tutorial
2. Instapaper