
Friday, October 25, 2013

How to Learn Python and R, the Data Science Programming Languages, from Beginner to Intermediate and Advanced

The Data Science programming / analytics languages to know are R and Python. If you're in Operations Research or another analytics field that somewhat fits under the "Data Science" hat, you either a) already know them really well, b) want to brush up on them, or c) probably should learn them now. Here I compile my thinking on how to learn R and Python from Beginner to the Intermediate and Advanced levels, based on having tried some of these course materials.

Beginner (doing basic analysis)

R:

Computing for Data Analysis on Coursera and YouTube (weeks 1, 2, 3, 4), by Roger Peng from Johns Hopkins University

  • Summary: It covers the basics of conditional and loop structures, R's syntax, debugging, object-oriented programming, and performing basic tasks with R such as importing data, basic statistical analysis, plotting and regular expressions. See the syllabus for more.
  • Time commitment: 11~36 hours total, including: 
    • non-programmers: 4 weeks × (3 hours/week on videos + 2~6 hours/week on exercises)
    • programmers: 3 hours reading the notes + 8~16 hours on exercises
  • Advice for: 
    • non-programmers: Listen to all the lectures (videos), make sure you understand all the details, and do all the exercises to hone your skills. Programming is all about practice, so doing the exercises is important. See the "Advanced" section below.
    • programmers: Don't bother with the videos; go straight to the lecture notes (link). Reading the notes is much faster than watching the videos. If you don't understand something, look up the corresponding video, or google the topic. Then do all the exercises. You don't need me to tell you that practice is king (um, and cash too).

The swirl package within R, by the Biostatistics team at Johns Hopkins University
  • Summary: It aims to teach R and Statistics within the R environment itself, through a package called swirl. See the announcement here for more detailed info.
  • I haven't tried this, so I'm not sure how much time it takes or how good it is. However, it sounds pretty good and deserves a mention. I was never a fan of reading books to learn a programming language. Show me the code, or in this case let me write the code and get involved; that is much more, well, involving.

Python:

Google's Python course (link)
  • Summary: It's straight to the meat, no-nonsense, and covers all the important things. Suits my style. Enough said; see the course page for the syllabus.
  • Time commitment: 8-10 hours
    • including reading notes and doing exercises
  • Note: this is for experienced programmers. There are videos too, but don't bother; the notes on the course page cover the same material, and it always takes less time to read than to watch. (A small example in the spirit of the course exercises follows below.)
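
For a flavour of the kind of exercise the course sets (file handling, dictionaries, a touch of regular expressions), here is a minimal sketch in Python; the file name is just a placeholder, not one of the actual course files.

    # Count word frequencies in a text file -- a typical beginner exercise.
    import re

    def word_counts(filename):
        counts = {}
        with open(filename) as f:
            for line in f:
                # Lower-case the line and split on anything that isn't a letter.
                for word in re.split(r'[^a-z]+', line.lower()):
                    if word:
                        counts[word] = counts.get(word, 0) + 1
        return counts

    # Print the ten most frequent words.
    for word, n in sorted(word_counts('sample.txt').items(),
                          key=lambda pair: pair[1], reverse=True)[:10]:
        print(word, n)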

Intermediate (building analytical models)

R:

Data Analysis with R on Coursera and YouTube (plus class notes), by Jeff Leek from Johns Hopkins University
  • Summary: It covers the full modelling cycle, from getting data, to structuring the analysis pipeline, exploring with graphs and statistical analysis, modelling (clustering, regression and trees), and model checking with simulation. It also talks about important statistical watch-outs like p-values, confidence intervals, multiple testing and bootstrapping (a small bootstrap sketch follows below). See the syllabus here for more.
  • Time commitment: 32~56 hours
    • including 8 weeks × (2~3 hours/week on videos + 2~4 hours/week on exercises)
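
The course does all of this in R; purely as an illustration of the bootstrapping idea mentioned above, here is a minimal sketch in Python with NumPy, using made-up data (not anything from the course).

    # Bootstrap a 95% confidence interval for the mean, on made-up data.
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(loc=10.0, scale=2.0, size=100)    # made-up sample

    boot_means = np.empty(10000)
    for i in range(boot_means.size):
        # Resample with replacement and record the resampled mean.
        resample = rng.choice(data, size=data.size, replace=True)
        boot_means[i] = resample.mean()

    lower, upper = np.percentile(boot_means, [2.5, 97.5])
    print("mean = %.2f, 95%% CI = (%.2f, %.2f)" % (data.mean(), lower, upper))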


Forecasting using R (link), by Rob Hyndman from Monash University in Australia and Revolution Analytics (the enterprise R solution)
  • Summary: topics include "seasonality and trends, exponential smoothing, ARIMA modelling, dynamic regression and state space models, as well as forecast accuracy methods and forecast evaluation techniques such as cross-validation. Some recent developments in each of these areas will be explored" (quoted from the course site). Read more there; a tiny exponential smoothing sketch follows below.
  • Note: I haven't done this (just started), so I'm not sure about its time requirement or quality. I'm also not sure whether they plan to make the lectures available. Time will tell on these questions.
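
The course itself works in R (Rob Hyndman's forecast package); just to illustrate one of the listed topics, here is a hand-rolled simple exponential smoothing sketch in Python, with made-up data and an arbitrary smoothing parameter.

    # Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    def exponential_smoothing(series, alpha=0.3):
        smoothed = [series[0]]                 # start from the first observation
        for x in series[1:]:
            smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
        return smoothed

    demand = [20, 22, 19, 25, 30, 28, 26, 31, 29, 33]   # made-up series
    fitted = exponential_smoothing(demand, alpha=0.3)
    print(fitted[-1])   # the flat one-step-ahead forecast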

Python / Octave:

Machine Learning on Coursera, by Andrew Ng from Stanford University --> My Favourite!
  • Summary: The course actually teaches in the Octave language, but it can all be done in Python as well (a minimal gradient descent sketch in Python follows below). I suppose you could do it twice, first in Octave and then in Python, if you've got the time; it would certainly solidify your understanding of the material, and Andrew Ng is adamant that Octave is rather important in Machine Learning. It assumes some prior knowledge of linear algebra and probability, and refreshes you on some basics. "Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI)." (quoted from the course website)
  • Time commitment: 50~90 hours
    • including 10 weeks × (2~3 hours/week on videos + 3~6 hours/week on exercises)
  • Note: this course covers a subset of the statistical and modelling principles from the Data Analysis with R course above, but the overall level is more advanced. I enjoyed this course the most.
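
To give a sense of what the programming assignments feel like, here is a minimal sketch of batch gradient descent for simple linear regression, translated into Python/NumPy rather than the course's Octave, with made-up data and an illustrative learning rate.

    # Batch gradient descent for linear regression (made-up data).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)
    y = 3.0 * x + 4.0 + rng.normal(scale=1.0, size=50)   # true slope 3, intercept 4

    X = np.column_stack([np.ones_like(x), x])            # add an intercept column
    theta = np.zeros(2)
    alpha, iterations = 0.01, 5000

    for _ in range(iterations):
        gradient = X.T @ (X @ theta - y) / len(y)        # gradient of the (halved) mean squared error
        theta -= alpha * gradient

    print("intercept, slope:", theta)                    # roughly [4, 3]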

Advanced (you get the drift from the above)

Advanced = Experienced.
This is true for programming, analytics, and learning any foreign language.

"Just do it", is how you get experienced.

There is no course on this stuff (i.e. being advanced), not without a PhD _plus_ years of field work.

My best suggestion is to use your curiosity. Find a problem. Dig into it.

Plus, work with other people who are really good.



Happy learning!


Tuesday, August 27, 2013

More MOOC on Analytics - Coursera

A horde of analytics-related Massive Open Online Courses (MOOCs) are about to start in September. Take your pick of what to learn. Having taken a few Coursera courses now, I would recommend: 1) not taking too many courses at once, however tempting it is to sign up for all of them, unless you have no other work or projects on the go, just to make sure you have a reasonable load and are able to devote enough of your attention to learning the material properly; 2) making good use of the discussion forums, as they are both a good source of clarifications and a window into other people's perspectives on the material; and 3) doing the exercises, programming assignments and quizzes to ensure you understand the material.

Linear and Integer Programming
Starts 2 Sept 2013, 9 weeks, 5-7 hours/week
(the basics of mathematical optimisation, a core toolkit in the field of Operations Research)

Statistics One
Starts 22 Sept 2013, 12 weeks, 5-8 hours/week

Introduction to Recommender Systems
Starts 3 Sept 2013, 14 weeks, 4-10 hours/week

Computing for Data Analysis
Starts 23 Sept 2013, 4 weeks, 3-5 hours/week
As I've written before here.

Web Intelligence and Big Data
Starts 26 Aug 2013, 12 weeks, 3-4 hours/week

Think Again: How to Reason and Argue
Starts 26 Aug 2013, 12 weeks, 5-6 hours/week
Perhaps a bit off topic, but perhaps not, since analytics is more or less rooted in proving or disproving arguments, so we had better learn how to do it well.


Related articles:
Coursera and the Analytics Talent Gap
Starting up in Operational Research: What Programming Languages Should I Learn?

Monday, July 29, 2013

Learn R with Coursera for Data Analysis

Heads up: the Computing for Data Analysis course is running in September 2013.

It will teach you the R language for data analysis. The course is described as:
This course is about learning the fundamental computing skills necessary for effective data analysis. You will learn to program in R and to use R for reading data, writing functions, making informative graphs, and applying modern statistical methods. 
In this course you will learn how to program in R and how to use R for effective data analysis. You will learn how to install and configure software necessary for a statistical programming environment, discuss generic programming language concepts as they are implemented in a high-level statistical language. The course covers practical issues in statistical computing which includes programming in R, reading data into R, creating informative data graphics, accessing R packages, creating R packages with documentation, writing R functions, debugging, and organizing and commenting R code. Topics in statistical data analysis and optimization will provide working examples.


Related articles:
Coursera and the Analytics Talent Gap
Starting up in Operational Research: What Programming Languages Should I Learn?

Thursday, May 31, 2012

Consistent Education Divide in Cities

The Daily Viz brought this to my attention. It's a visual by the New York Times showing how the distribution of cities by proportion of adults with college degrees has changed over the last 40 years.

Nicely formatted and presented, though the layout makes it a little hard to compare the distributions side by side.

The key story this visual tells is that the average has moved from 12% to 32%, but also that the number of cities more than 5 percentage points above or below the average has increased substantially. "College graduates are more unevenly distributed in the top 100 metropolitan areas now than they were four decades ago." But I'm not sure it's as simple as that.

Suppose I were measuring trees. One species is 10 feet tall on average and a second species is 100 feet tall. If the first tends to vary between 7 and 13 feet, while the second tends to vary from 85 to 115 feet, I wouldn't remark on how much more variable the second species is. For species one, no tree is more than 3 feet from the average, while in species two, presumably many are. Is this a sign that species two is more unevenly distributed? Not really. Species one varies up and down by 30%, whereas species two does so by only 15%.

So I asked myself: given that the average proportion of adults with college degrees has nearly tripled to 32%, has the variability across cities increased proportionally? Now that these trees are 32 feet tall, it seems strange to still measure their "unevenness" by how many of them fall between 27 and 37 feet.

So I reached for a statistic, the Coefficient of Variation. Using my eyes to read the data off the charts (so not precisely the correct data), I calculate a coefficient of roughly 0.25 in 1970 and 0.22 in 2010. The variation in the data, as a proportion of the average, has gone down over the last four decades.
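
Spelling out the arithmetic: the coefficient of variation is just the standard deviation divided by the mean. A minimal sketch in Python, using the means quoted above and the rough standard deviations implied by the coefficients I read off the charts (not the actual chart data):

    # Coefficient of variation = standard deviation / mean.
    # Means are the quoted averages; the standard deviations are the rough
    # values implied by the CVs I eyeballed (sd = CV * mean), not exact data.
    mean_1970, sd_1970 = 12.0, 3.0
    mean_2010, sd_2010 = 32.0, 7.0

    print(sd_1970 / mean_1970)   # 0.25
    print(sd_2010 / mean_2010)   # about 0.22 (0.21875)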

Again, the NYT concludes that "College graduates are more unevenly distributed in the top 100 metropolitan areas now than they were four decades ago," but I would argue that, if anything, they are slightly more evenly spread than before, though not remarkably so.