Impact Lab

April 5th, 2016 at 7:50 am

If You Want To Learn To Code, A Degree Might Be a Huge Waste Of Time


Unless you get a degree from Stanford or MIT, it will mean a lot less than having built your own apps to show off. Most universities struggle to keep up with changing technology, and a degree will only boost your pay for the first one to three years. After that, self-taught developers catch up to the experience a university degree provides. So unless you like wasting your time and money…

NOTE: For anyone wishing to own their own career in computer programming, check out the upcoming courses at DaVinci Coders. New classes starting soon.

I’ve talked about this a lot, but I’ve never written a detailed, data-backed blog post on the topic. If I’m gonna make a strong claim like that, I’d better back it up.

Here’s reality. (Data from the 2016 Stack Overflow survey of 56,033 coders):

  • Mentorship programs have a stronger correlation with higher pay than a university degree
  • There is less than 1% difference in pay between masters degree holders and bootcamp graduates
  • 69% of working software developers are self-taught
  • 43% cite on-the-job training as their primary learning resource
  • 25% used online courses
  • Only 19% have a masters degree related to CS
  • Only 8.5% have a B.A. in CS
  • 6.5% graduated from a Bootcamp (this number is growing fast)

When it comes to software engineering jobs, “or equivalent experience” is expressed or implied about 96% of the time. (About 4% of developer jobs require advanced math and science work).

After three years on the job and a track record of building great products, nobody cares anymore whether you went to school, and for that reason, the pay difference evaporates as you pick up more experience.

A college degree will earn you a few thousand dollars more per year, but only for the first 3 years. After that, it doesn’t make any difference at all.

A Degree Won’t Open More Doors than Faster, Less Expensive Options

The only thing employers really want to know about your education is whether or not you know how to code. All evidence of that will be considered (including a degree if you’re junior), but employers have a strong preference for proof of skills in real code, not a piece of paper.

The best way to learn to code is to code. The best way to prove you can code is to code.

Degrees Give You A Good CS Foundation in Theory

Yes, university CS does give you a solid foundation of algorithms, data structures, and computer science fundamentals. That is absolutely true, and it can certainly be valuable. You’ll get a much stronger foundation in theory.

The key here is in theory.

The trouble is, most universities don’t help much at all with actual software engineering. Engineering is about applications, not theory.

Most universities teach a variety of well-known cookie-cutter algorithms, many of which are not commonly used in modern programming languages because better alternatives are built into the language or standard libraries.

What students really need to learn is how to solve problems with their own brains, rather than studying solutions out of textbooks whose first editions were written 30 years ago when applications were much, much different than they are today.

When it comes right down to it, data structures and algorithms are really about finding performant solutions.


Higher Ed and Sort Algorithms

Intro CS courses are obsessed with teaching sort algorithms. You’ll probably spend the better part of a whole semester on them. Is that because you need to know 6 different sort algorithms to choose the best one?

No. Absolutely not. It’s because there are lots of different sort algorithms with wildly different performance characteristics, which makes it an interesting study in performance profiling, and a great way to learn about big O notation — a way to understand the performance characteristics of an algorithm.

The problem is, while big O notation is useful, you can sum it up with some simple intuition: More work = worse performance:

  • Iterating and operating over large lists is slow
  • Increasing the number of iterations will slow things down
  • Multiplying the number of iterations will slow things down a lot
  • Multiplying iterations by orders of magnitude will make things crawl
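That intuition can be sketched in a few lines of JavaScript by counting basic operations (the function names here are just for illustration):

```javascript
// One pass over the list: n operations, i.e. O(n).
function linearWork(list) {
  let ops = 0;
  for (const _item of list) ops += 1;
  return ops;
}

// A pass nested inside a pass: n * n operations, i.e. O(n^2).
function quadraticWork(list) {
  let ops = 0;
  for (const _outer of list) {
    for (const _inner of list) ops += 1;
  }
  return ops;
}

const small = Array.from({ length: 10 }, (_, i) => i);
const large = Array.from({ length: 1000 }, (_, i) => i);

// Growing the input 100x grows linear work 100x...
console.log(linearWork(large) / linearWork(small));       // 100
// ...but grows quadratic work 10,000x. That's what makes things crawl.
console.log(quadraticWork(large) / quadraticWork(small)); // 10000
```

More work really does equal worse performance; the only question is how fast the work multiplies as the input grows.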

Here’s the catch: unless you’re talking about very large collections or very bad algorithms, teaching students that network, disk, and render operations are expensive is much better than teaching them 6 different sort algorithms, all of which are slower than the built-in `Array.prototype.sort()`.

I know it’s useful to study existing algorithms and compare them, but really… this is ridiculous:

  • CMU
  • Rutgers
  • MIT
  • MIT again…
  • More MIT…
  • Will it ever end?

Yes, they’re teaching a lot of different lessons with all those sorting algorithms and data structure strategies, but by the time students get through all of this and finally reach the trusty quick sort, they’ve run out of attention. They miss the point, and they’re left wondering:

“If quick sort is so much faster than merge and heap sort, why did we learn merge and heap sort to begin with?”

And that is a totally legitimate question.

The Real World

Meanwhile, in the real world, it turns out that none of those algorithm choices will have anywhere near the performance impact of conserving network and disk access, or of using lazy evaluation and streams.

And when you get to web scale and reach collection sizes where algorithm efficiency is really important, network, disk access, and streaming throw a big monkey wrench in analyzing the performance of your solutions — and the feasibility of any algorithm that relies on shared memory in the first place.

When it comes to real applications, nothing is as cut and dried as the theory describes.

By concentrating on theory at the expense of concentrating on applications, students get a warped idea of performance profiles which doesn’t take the physical realities of modern computation into account. Students learn to obsess over performance characteristics of things that don’t make any practical difference to the performance of real apps.

The good news is, you can skip some of the less valuable crap from CS courses and learn about concurrency, streams, working with machine clusters, and lazy evaluation instead of implementing yet another impractical textbook shared-memory sort algorithm that has already been implemented 10 million times and is optimized for long-obsolete 1980s machine architectures.

Don’t You Need to Know Lots of Data Structures and Algorithms?

Universities love sort algorithms. The trouble is, there’s really only one sorting algorithm you need to know anything about: the one built into your language or standard library, and what you really need to know about it is the API.

The fastest general-case, in-memory sorts are built into languages and standard libraries, and other factors matter much more for performance in almost all applications (network, disk access, lazy vs. eager evaluation, etc.).
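In JavaScript, for instance, the one API detail worth internalizing about the built-in sort is that it compares elements as strings by default, so numeric sorts need a comparator:

```javascript
const nums = [25, 100, 9, 3];

// The default comparison coerces elements to strings,
// which is almost never what you want for numbers.
console.log([...nums].sort());                // [100, 25, 3, 9] (lexicographic!)

// Pass a comparator to sort numerically.
console.log([...nums].sort((a, b) => a - b)); // [3, 9, 25, 100]
```

Knowing that one API quirk will save you more real-world debugging time than memorizing six textbook sort algorithms.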

I have only needed to use an alternate sort algorithm once in the last 20 years. Chances are, you never will.

For almost all cases, if anything, I would consider knowledge and conscious consideration of lots of sort algorithms a distraction from most real-world application performance issues. “The Paradox of Choice” is relevant here.

The bottom line when it comes to performance is, profile first. Test, and find the parts of your app that really are bottlenecks, and concentrate on fixing those.
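A minimal profile-first sketch in JavaScript (the function and label names are invented for illustration): measure the suspect path before touching anything.

```javascript
// Hypothetical suspect: is this loop actually a bottleneck?
function sumUpTo(n) {
  let total = 0;
  for (let i = 0; i < n; i += 1) total += i;
  return total;
}

// Measure first. console.time/timeEnd are built into Node and browsers.
console.time('sumUpTo');
sumUpTo(10_000_000);
console.timeEnd('sumUpTo'); // prints the elapsed time for this label

// Only if the measurement proves this is a real hotspot is it worth
// replacing, e.g. with the closed-form n * (n - 1) / 2.
```

For anything beyond quick checks, a real profiler (such as the one built into Node or your browser's dev tools) will point at hotspots for you.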

“Premature optimization is the root of all evil” ~ Donald Knuth

Read a basic summary of big O notation and get an understanding of the fundamentals.

As you’re working, pick the first algorithm that springs to mind as a good solution, and just use it. Only fix it if you prove with tests that it needs to be fixed.

This might sound scary at first, but with practice, you’ll tend to pick the right algorithm for the job automatically, without consciously calculating big O, the same way you throw a ball without sitting down with a calculator first.

This is how programming works in the real world. A great developer will rarely need to consciously think about it, the same way that a great musician never consciously thinks about the notes — they feel them, and the notes play themselves.

When you profile and find a hotspot that needs to be dealt with, you can use Google to help you identify good alternatives, and you can base your choices on the actual machine architecture of your real application, rather than the advice of a textbook author from the ghost of CS past.

University curriculum is optimized for a pre-Google era.

Paid University Programs Are a Terrible Value

A university program happens to be a really bad deal, both financially, and in terms of the value of the curriculum.

My primary motivation for this rant is that in the US, average college tuition is $18,943 per year, or roughly $76k over a four-year degree. After university, students land in entry-level programming jobs and earn up to about $10k more than those without a degree, but only for the first 3 years, meaning their degree buys them about $30k after spending $76k: a net loss of $46k. And those are just averages.

Many US universities will happily rob you of hundreds of thousands of dollars that you will never earn back on the job.

Obviously, the situation isn’t as bad in many other countries where university educations are free.

Even so, good university CS curriculum is a rare treasure. Good software developers never stop learning. I’ve been observing and partaking in university curriculum from the best schools in the business (such as MIT) since the beginning of the OpenCourseWare program.

I’ve also looked at many other university programs. I know what the schools are teaching.

The sad reality is that most concentrate on impractical, outdated theory primarily in C/C++ or Java. Maybe Lisp, as a curiosity. Some will have trivial intros in Python or JavaScript.

Sadly, a curriculum that fails to introduce you to a variety of programming paradigms, such as functional programming and prototypal inheritance, may leave you with bad habits, like classical inheritance and poor OO design, that can actually hurt your chances of building quality software and landing a great job, rather than help you.

For this reason, a university CS degree can actually be a red flag. I never hire fresh university graduates without great code samples and in-depth interviews to make sure they understand some important fundamentals.

In spite of my feelings about university curriculum, there is some great stuff that’s hard to find anywhere else:

  • Erik Meijer’s functional programming lectures
  • Stanford’s Machine Learning course
  • Coding the Matrix: Linear Algebra through Computer Science Applications

The bad news for students who’ve already paid tuition is:

  • Chances are, random school x does not have courses of that caliber in their course catalog.
  • It’s hard to find all the great courses you’ll want to take at any one university.
  • You can take all of those amazing courses online for free.

The Value of Networking

The real value of that stunningly overpriced university program is from the people you meet — but the quality of networking opportunities in university programs is distributed very unfairly. If you happen to be at Stanford, that networking will be extremely valuable.

If you happen to be at some random state college or university without the ivy-league connections and direct feed into the Silicon Valley business incubator and investment ecosystem, you could get better networking for the price of a $4 coffee in any SOMA, SF coffee shop.

Not sure where to meet people? Try local tech meetups.

What Should You Do Instead?

  • Just start coding
  • Find a great mentor
  • Find a bootcamp
  • Online learning
  • Meetups
  • Blogs, books, etc…

The Exceptions

If you can find a really good university program and it will cost you little or no money, by all means, take advantage of that. My anti-university rant is mostly about universities in the US who will happily eat up hundreds of thousands of dollars in tuition money that you will never earn back on the job, and then fail to teach you the things you really need to learn.

Cutting Edge Research & Science Jobs Favor Degrees

There are a few specialties where degrees really do make a difference, and the pay is better than average. These positions account for ~4% of all developer jobs, according to Stack Overflow.

Specialists in machine learning, data science, biological science, and quantum computing are more likely to be highly educated. Developers of augmented reality platforms will need advanced mathematics, as well.

You absolutely can learn this stuff online, but a degree will lend you the additional credibility needed for the roles. If you have a passion for one of those topics and you know that’s where you want to work, get a degree.

If you’re not genuinely interested and passionate about your field of study, and you lack a natural aptitude for math and science, you may wash out and waste your investment.

We can’t all play in the NBA.

Luckily, you don’t need as much training to build apps on top of those platforms after they’ve been built, but if you want to pioneer the next Oculus Rift or Tesla autopilot, find a university with a track record of producing related startups.

There are many ways to multiply your earnings potential aside from an advanced research degree, but you’ll need other kinds of skills, an entrepreneurial streak, or some good luck.

The Bottom Line

If you want to build the next Tesla, Oculus Rift, or AI Go champion, and you have an uncommon natural aptitude for math and science, get a degree.

If you want to get a great job building great apps with the latest hot tech stack, you don’t need it. It’s a huge waste of time and money.

Photo via: davincicoders


