
harry palmer's Public Library

    • 2. Python application

       

      Libraries required

       

      Libraries required for this application are:

        

      You can install them manually or with the pip command:

    • Note: The following assumes that you want to grab multiple tweets and find their locations. If you don't, and you just want a single tweet by its id, use statuses/show.

       
        

      It's entirely possible, provided the user has enabled location for their tweet. The location will be the value of the coordinates key, which will be null if they haven't.

        

      Let's say you're using the following API method: statuses/user_timeline.

        
         
      • This is a GET request
      •  
      • The resource URL is: https://api.twitter.com/1.1/statuses/user_timeline.json
      •  
        

      You can specify either the user_id or screen_name as part of the GET parameters, for example: ?screen_name=J7mbo.

        

      In the tweet results, once json_decode() is run on them, one of the keys will look like this:

        

      [image: the coordinates key as it appears in the API documentation]

        

      This is actually the first key, according to the documentation, so you should be able to find it and its value pretty easily.

        

      Warning

        

      Do not get confused with the location sub-key underneath the User key. This is the location of the user, as per their profile, not the location of any specific tweet. Use the coordinates key for that.
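
      A minimal Python sketch of that flow (the credential strings and the screen_name are placeholders; it assumes the requests and requests_oauthlib libraries, and the answer's json_decode() step corresponds to resp.json() here):

      import requests
      from requests_oauthlib import OAuth1

      # OAuth 1.0A credentials from your Twitter app (placeholders)
      auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

      url = "https://api.twitter.com/1.1/statuses/user_timeline.json"
      resp = requests.get(url, auth=auth,
                          params={"screen_name": "J7mbo", "count": 20})
      tweets = resp.json()

      for tweet in tweets:
          # coordinates is null (None) unless the user enabled location for that tweet
          if tweet.get("coordinates"):
              print(tweet["id_str"], tweet["coordinates"]["coordinates"])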

        

Feb 26, 15

"Hash history with jQuery BBQ

As cool as Isotope is, the only thing that could make it even cooler would be adding bookmark-able URL hashes. Ben Alman’s jQuery BBQ allows us to do just that.

jQuery BBQ leverages the HTML5 hashchange event to allow simple, yet powerful bookmarkable #hash history.

See Demo: Hash history

BBQ is a marvelous plugin that provides for a lot more functionality. The hash history demo uses multiple options (sortBy, sortAscending, and layoutMode in addition to filter), the ability to use back-button history, and properly highlights selected links.

Given BBQ’s tremendous capabilities, the code can grow to be a bit complex. Be sure to read through BBQ’s docs and take a look at its examples before you dive in and code up your own solution."

Feb 26, 15

"CREATING AN INTERACTIVE CHART WITH D3

Andy Aiken
Published
19 Sep 2014
Recently I’ve been looking at various D3 components, which has been a fun project. I haven’t yet had the chance to develop an interactive, dynamic component though, which has meant that the resulting charts have been sadly static. For this article I wanted to use what I’ve learned to build a fully interactive chart - something that wouldn’t look out of place on a financial app.

Here’s the chart we’re going to build:"

  •  
     

    In this article, I will explain JavaScript prototype inheritance. It may not cover every case related to this topic, but it will give you a clear picture of it.

     

    Prototype-Based Programming

     

    The concept of object orientation in JavaScript is different from the one in programming languages like Java, C#, and C++. JavaScript is a prototype-based language: you don't find the class keyword that exists in other languages. Instead, JavaScript uses functions as classes.

     

    Example:

  • Mining the Social Web, 2nd Edition

      

    This IPython Notebook provides an interactive way to follow along with and explore the numbered examples from Mining the Social Web (2nd Edition). The intent behind this notebook is to reinforce the concepts from the sample code in a fun, convenient, and effective way. This notebook assumes that you are reading along with the book and have the context of the discussion as you work through these exercises.

     

    In the somewhat unlikely event that you've somehow stumbled across this notebook outside of its context on GitHub, you can find the full source code repository here.

      

    You are free to use or adapt this notebook for any purpose you'd like. However, please respect the Simplified BSD License that governs its use.

    Twitter API Access

     

    Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https://dev.twitter.com/apps and create a sample application. There are four primary identifiers you'll need to note for an OAuth 1.0A workflow: consumer key, consumer secret, access token, and access token secret. Note that you will need an ordinary Twitter account in order to login, create an app, and get these credentials.
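
    A hedged sketch along the lines of the notebook's first example, assuming the twitter package (python-twitter-tools) that the book uses; the four credential strings are placeholders you fill in from your app's settings:

    import twitter

    CONSUMER_KEY = ''        # from https://dev.twitter.com/apps
    CONSUMER_SECRET = ''
    OAUTH_TOKEN = ''
    OAUTH_TOKEN_SECRET = ''

    # Build the OAuth 1.0A object and a connector to Twitter's API
    auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
                               CONSUMER_KEY, CONSUMER_SECRET)
    twitter_api = twitter.Twitter(auth=auth)
    print(twitter_api)  # just confirms the connector object was created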

  • Mining the Social Web, 2nd Edition

     

    Chapter 2: Mining Facebook: Analyzing Fan Pages, Examining Friendships, and More

     

    This IPython Notebook provides an interactive way to follow along with and explore the numbered examples from Mining the Social Web (2nd Edition). The intent behind this notebook is to reinforce the concepts from the sample code in a fun, convenient, and effective way. This notebook assumes that you are reading along with the book and have the context of the discussion as you work through these exercises.

     

    In the somewhat unlikely event that you've somehow stumbled across this notebook outside of its context on GitHub, you can find the full source code repository here.

      

    You are free to use or adapt this notebook for any purpose you'd like. However, please respect the Simplified BSD License that governs its use.

    Facebook API Access

     

    Facebook implements OAuth 2.0 as its standard authentication mechanism, but provides a convenient way for you to get an access token for development purposes, and we'll opt to take advantage of that convenience in this notebook. For details on implementing an OAuth flow with Facebook (all from within IPython Notebook), see the _AppendixB notebook from the IPython Notebook Dashboard.

     

    For this first example, login to your Facebook account and go to https://developers.facebook.com/tools/explorer/ to obtain and set permissions for an access token that you will need to define in the code cell defining the ACCESS_TOKEN variable below.

     

    Be sure to explore the permissions that are available by clicking on the "Get Access Token" button that's on the page and exploring all of the tabs available. For example, you will need to set the "friends_likes" option under the "Friends Data Permissions" since this permission is used by the script below but is not a basic permission and is not enabled by default.
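
    Once you have pasted a token into ACCESS_TOKEN, a quick way to verify it works is a plain HTTP request against the Graph API; this is only an illustrative sketch using requests, not the notebook's own code:

    import requests

    ACCESS_TOKEN = ''  # paste the token obtained from the Graph API Explorer

    resp = requests.get('https://graph.facebook.com/me',
                        params={'access_token': ACCESS_TOKEN})
    print(resp.json())  # basic profile fields for the authenticated user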

  • Example 15. Serializing a NetworkX graph to a file for consumption by D3

    In [ ]:

    import json
    from networkx.readwrite import json_graph

    # nxg is the NetworkX graph built in the earlier examples
    nld = json_graph.node_link_data(nxg)
    json.dump(nld, open('resources/ch02-facebook/viz/force.json', 'w'))

    Note: You may need to implement some filtering on the NetworkX graph before writing it out to a file for display in D3, and for more than dozens of nodes, it may not be reasonable to render a meaningful visualization without some JavaScript hacking on its parameters. View the JavaScript source in force.html for some of the details.
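
    One way to do that filtering, sketched here with a toy graph standing in for the notebook's graph and an illustrative degree threshold, is to keep only reasonably connected nodes before serializing:

    import json
    import networkx as nx
    from networkx.readwrite import json_graph

    nxg = nx.karate_club_graph()  # stand-in for the graph built earlier in the notebook
    keep = [n for n in nxg.nodes() if nxg.degree(n) >= 2]  # illustrative threshold
    sub = nxg.subgraph(keep)

    # The notebook writes to resources/ch02-facebook/viz/force.json
    json.dump(json_graph.node_link_data(sub), open('force.json', 'w'))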

  • Recently I've been working with recommender systems and association analysis. The latter, especially, is one of the most widely used machine learning techniques for extracting hidden relationships from large datasets.
     
     
     
     The famous example in association analysis is the story of baby diapers and beer. The story goes that a certain grocery store in the Midwest of the United States increased its beer sales by placing the beer near where the diapers were stocked. What happened is that the association rules pointed out that men bought diapers and beer together on Thursdays, so the store could profit by placing those products together, which would increase sales.
     
     
     
     Association analysis is the task of finding interesting relationships in large data sets. These hidden relationships are then expressed as a collection of association rules and frequent item sets. Frequent item sets are simply collections of items that frequently occur together, and association rules suggest a strong relationship between two items. Let's illustrate these two concepts with an example.
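
     A tiny hand-rolled sketch of both ideas on a made-up list of transactions (no library assumed): the support of an item set is the fraction of transactions containing it, and the confidence of a rule X -> Y is support(X and Y) / support(X).

     transactions = [
         {'diapers', 'beer', 'milk'},
         {'diapers', 'beer'},
         {'milk', 'bread'},
         {'diapers', 'bread', 'beer'},
     ]

     def support(itemset):
         # fraction of transactions that contain every item in the set
         return sum(itemset <= t for t in transactions) / len(transactions)

     print(support({'diapers', 'beer'}))                         # frequent item set: 0.75
     print(support({'diapers', 'beer'}) / support({'diapers'}))  # confidence of {diapers} -> {beer}: 1.0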

    • Data mining is the extraction of implicit, previously unknown, and potentially useful information from data. It is applied in a wide range of domains and its techniques have become fundamental for several applications. 

       This Refcard is about the tools used in practical Data Mining for finding and describing structural patterns in data using Python. In recent years, Python has become increasingly used for the development of data-centric applications, thanks to the support of a large scientific computing community and to the increasing number of libraries available for data analysis. In particular, we will see how to: 

        
         
      • Import and visualize data
      •  
      • Classify and cluster data
      •  
      • Discover relationships in the data using regression and correlation measures
      •  
      • Reduce the dimensionality of the data in order to compress and visualize the information it brings
      •  
      • Analyze structured data
      •  
        

      Each topic will be covered by code examples based on four of the major Python libraries for data analysis and manipulation: numpy, matplotlib, sklearn, and networkx.
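
      As a flavour of what those examples look like, here is a hedged sketch that imports a sample dataset with sklearn, clusters it, and visualizes the result with matplotlib (the dataset and number of clusters are only for illustration):

      import matplotlib.pyplot as plt
      from sklearn.datasets import load_iris
      from sklearn.cluster import KMeans

      data = load_iris().data                            # import the data
      labels = KMeans(n_clusters=3).fit_predict(data)    # cluster it
      plt.scatter(data[:, 0], data[:, 1], c=labels)      # visualize two of the features
      plt.show()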

        

  • PubNub recently released a demo Twitter stream published through their hosted websockets. The stream can be consumed on a variety of platforms using one of their many SDKs. In Part 1 of this two-part series, guest blogger and PubNub Developer Advocate Tomomi Imura will walk you through creating a cartographic visualization using D3, Twitter data and the PubNub API.

     

    If you’re trying to build awesome web applications by processing Twitter streaming data, wouldn’t it be great if you could skip the complicated process of long polling and just write front-end code with JavaScript? The @PubNub real-time public Twitter stream makes that possible for you.

     

    The PubNub data stream network allows you to send and receive JSON data on any device using persistent socket connections. Now, with PubNub’s newly announced real-time Twitter stream, you can consume this public data without hassle.

  • What can be done?

     

    This is obviously not a positive factor for my switch from R to Python, and I’m hoping it’s just that I’ve done something wrong. However, another explanation is that whatever Seaborn is using to do the bootstrapping or logistic model fitting is just far less optimised than ggplot2’s backend in R.

     

    The nice thing about open source software is that we can help to make this better. So if you’re a code guru who’s reading this and wants to contribute to the scientific computing world moving ever faster to Python, go fork the github repo now!

     

    Update

     

    After I posted this, I opened an issue on GitHub asking the developers about the slow times. It turns out that the ci flag in sns.lmplot specifies confidence intervals for the logistic regression, which are also bootstrapped. Bootstrapping a logistic regression takes a while; setting ci=False means that Seaborn now takes about 7 seconds to produce that plot instead of 2 minutes.

     

    So, hooray for Seaborn and for awesome open source developers!
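
    For reference, a hedged sketch of the call in question on synthetic data (the column names are placeholders); ci=None disables the bootstrapped confidence band, which is what produces the speed-up the post describes with ci=False:

    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Synthetic stand-in for the data: x is continuous, y is a binary outcome
    rng = np.random.RandomState(0)
    x = rng.normal(size=500)
    y = (x + rng.normal(scale=0.5, size=500) > 0).astype(int)
    df = pd.DataFrame({'x': x, 'y': y})

    sns.lmplot(x='x', y='y', data=df, logistic=True, ci=None)
    plt.show()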

Feb 23, 15

"I installed Python version 3.4.2 using pyenv on Ubuntu 14.04, I then installed pyside:

$ pip install pyside
and then installed numpy and matplotlib:

$ pip install numpy
$ pip install matplotlib
If I now try to import matplotlib from ipython:

In [1]: import matplotlib
/home/hakon/.pyenv/versions/3.4.2/lib/python3.4/site-packages/matplotlib/__init__.py:1039: UserWarning: Bad val "pyside" on line #39
"backend : pyside
"
in file "/home/hakon/.pyenv/versions/3.4.2/lib/python3.4/site-packages/matplotlib/mpl-data/matplotlibrc"
Key backend: Unrecognized backend string "pyside": valid strings are ['emf', 'GTK', 'GTK3Agg', 'nbAgg', 'CocoaAgg', 'GTKAgg', 'pgf', 'agg', 'Qt4Agg', 'pdf', 'ps', 'cairo', 'MacOSX', 'WX', 'WebAgg', 'gdk', 'svg', 'TkAgg', 'GTK3Cairo', 'template', 'Qt5Agg', 'WXAgg', 'GTKCairo']
(val, error_details, msg))
If I edit the matplotlib configuration file /home/hakon/.pyenv/versions/3.4.2/lib/python3.4/site-packages/matplotlib/mpl-data/matplotlibrc, I can see that it has the line:

backend : pyside
If I change this to:

backend : Qt4Agg
backend.qt4 : PySide
it works fine.

The question is: Why does the matplotlibrc file have an invalid backend (pyside) value in the first place?"
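
An alternative to editing matplotlibrc is to pick the backend from code before pyplot is imported. A hedged sketch for the 1.x-era matplotlib in the question (the backend.qt4 rcParam only exists in those older releases):

import matplotlib
matplotlib.use('Qt4Agg')                       # a valid backend name
matplotlib.rcParams['backend.qt4'] = 'PySide'  # make the Qt4 backend use PySide
import matplotlib.pyplot as plt                # import pyplot only after selecting the backend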

    • stared commented:

      I had the same problem with pairplot. It took me some time to discover that it is a problem with the library, not the data.

        

      Upgrading matplotlib did solve it (with 1.3.1 it was not working, with 1.4.2 it is). So please, either

        
         
      • put a given version of matplotlib in install requirements, or
      •  
      • make the error explicit.
      •  
        

      Otherwise it's confusing and counterproductive.

  • Seaborn is great for exploratory analysis.

      

    Python's blaze and dask have no R parallel: a consistent array and chunking interface to a variety of backends. Pretty innovative, if you ask me.

      

    Super easy to create an app with blaze and bokeh.

  • For that sort of thing on the Python side I'm a fan of Seaborn along with the IPython notebook's interact utilities ... it's easy to set up the ability to vary faceting, binning, or, well, anything with appropriate sliders and drop-downs to let you really play with ease (it shortens the experimentation loop). A small sketch follows below.
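
    A minimal sketch (the names and data are made up) of wiring a binning slider to a histogram with ipywidgets' interact inside a notebook:

    import numpy as np
    import matplotlib.pyplot as plt
    from ipywidgets import interact

    data = np.random.randn(1000)  # placeholder data

    @interact(bins=(5, 100, 5))   # slider from 5 to 100 bins in steps of 5
    def plot_hist(bins=20):
        plt.hist(data, bins=bins)
        plt.show()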
  • The vast majority of your time spent on entry-level data analysis will be spent taking sloppy data from various sources and formats and transforming it into something clean that can be imported into a database. You'll be writing quite a bit of "throw away" code for this, since it mostly deals with idiosyncrasies and ad hoc situations, and writing quick throw-away scripts is one of the things Python is very good at.

      

    The only things I wouldn't use Python for are enterprise-level applications, where Java or .NET would be more practical and durable, and data visualization, for which JavaScript and D3.js are currently the champions (unless you're talking about database reporting, but that's another topic).

  • You either love it or hate it. Poetry is one of those things that if you love it, it can be pretty easy to go about memorizing. But if you hate it, or just don’t connect with a particular piece, it can be downright nasty.

     

    Your whole life you’ve probably approached memorizing poetry by just repeating lines and phrases over and over and over and over and over (and over!) again until it sticks. That definitely works but a). it’s extremely inefficient and b). it’ll make you want to gouge your eyes out from boredom. Memory techniques are the way to go. Don’t believe me? Ask Ed Cooke, UK memory grandmaster. He used them to memorize massive portions of the epic poem “Paradise Lost,” a monster of a poem that has 10,000+ individual lines of verse. Ouch. I’ve also used the same techniques to memorize 50 line poems in 15 minutes (in competition), some easy and fun, some abstract and boring.

     

    There are a few ways to go about memorizing text. The first is to do it rotely, by pure repetition. If you have the time and you enjoy torture, this is the method for you. But I don't recommend it at all. Because… torture is not fun. It's torture.

     

    The second approach is to use a journey or memory palace to store pieces of the text. Once you've wised up and decided to use this approach, there's a bit of personal preference that comes into play: how much information should I store at each locus? Each and every word? Groups of words? Lines? Verses? Key words and topics here and there? Opinions may differ given that some people are better at remembering different amounts of things. For this blog post I'm going to share my personal preferences since it has allowed me to be pretty successful at it – I tied the USA record for memorizing the most lines of a poem this past year (given the fact that I actually hate memorizing poetry, text, or lyrics, I'd say that's pretty good; check out my graded papers below).

Feb 16, 15

"Beginner Kettlebell Training Program with Terence Gore

This program is easily adaptable for any fitness level. The key to making it work is to choose the appropriate weight, and to make any adaptations necessary to make the movements safe and appropriately difficult. For most of the movements that follow, we’ve included adaptations for beginner and advanced practice, and the primary movements described should work well for anyone at a level in between.
"


     

  • The Problem with Choosing a Starting Kettlebell Weight

     

    There are different problems with picking a kettlebell weight depending on your weight training experience. If you have never trained with weights before, you are more likely to think that the beginner weights that I suggest are too heavy. Conversely, if you are very familiar with weight training and have used it for years, you will most likely think that the weights I suggest are too light.

     

    What I need you to do is throw away your current perception of weight training and look at the kettlebell as something that is totally new and different; for that reason, you cannot have an opinion of the weight you need, period. You must do what every trainer in the world hopes you will do: be open, listen, and learn.

     

    While you may not think you need to, having at least one session with a trained kettlebell professional will make an enormous difference in your results. Kettlebell training is very different from standard isolation training; you will be using multiple muscle groups at the same time through ballistic, full-body movements.

     

    Most likely, you have never trained like this before. A kettlebell professional should be able to show you the basics (for example, the Clean, Swing, Goblet Squat, Windmill, and Turkish Get Up). While you may not perfect the form, the trainer will give you tips to get started, as well as how to avoid injury. GET A TRAINER WHEN YOU START.
