A behind-the-scenes look at Exercism for improving coding skills

Since its launch, the community has contributed 70 exercises and 50 programming languages to Exercism.


In our recent article, we talked about Exercism, an open source project to help people level up in their programming skills with exercises for dozens of different programming languages. Practitioners complete each exercise and then receive feedback on their response, enabling them to learn from their peer group's experience.

Katrina Owen is the founder of Exercism, and I interviewed her as research for the original article. There are some fantastic nuggets of information and insight here that we wanted to share with anyone interested in learning to program, teaching programming, or how a project like this draws contributions from others.

Exercism looks great! How many people are using it?

There are 125,000 people in the database. Of those, since the project launched in 2013 about 29,000 have actually submitted solutions and 15,000 have commented on solutions. And 1,000 people have contributed to the project on GitHub. Exercism has 33 active language tracks, and 22 language tracks that are in the works or have been requested but haven't yet begun.

According to Google Analytics we get about 50,000 visitors a month, 12,000 a week, and they come from 201 different countries.

Are any learning institutions using Exercism?

Yes, apparently TONS! I randomly ran into a French professor who said he used Exercism with his OCaml students. A lot of the bootcamps use it, including Fullstack, Turing, gSchool, IronYard, Ada Academy, Flatiron School, DaVinci Coders, General Assembly, and probably others. I don't know how they use it officially, though. Is it part of the curriculum? Or just something they suggest to the students? I don't know. One guy from the IT department at Yale said he was running a summer programming class using it.

Are all the exercises community contributed? What are the criteria for a new exercise?

Yes. I created or sourced the first 30 or so and implemented them in Ruby; the community then contributed another 70 exercises, and they have since contributed implementations in some 50 languages.

Each exercise has to be a fairly trivial problem that doesn't touch real-world technologies (no web servers, no databases, etc.).

An exercise doesn't need to teach a specific concept. We're aiming for practice more than anything, so we need to give people enough things to practice without them getting bored. We've deprecated several of the early problems that I made up because they were too similar to each other. Any new exercises should be different enough from existing problems.

Sometimes languages have features that are hard to learn, so we provide exercises that introduce one feature at a time in that language. Afterward, other language tracks implement the exercise as practice, not because it's about a specific language feature.

Learners can't move on to the next exercise until they complete the current one. What does "complete" mean? Just submitting it?

They have to submit a solution, and at that point, if they "exercism fetch" again, they'll get the next exercise. So technically you don't really have to learn anything to move on: you could submit a comment, or you could skip the exercise entirely (we have a command for that in the command-line client).

The soul of Exercism is discovery and collaboration. It's a journey versus destination sort of thing. Exercism is at its most valuable if you slow down and explore. People who act as though the point is to get through as many exercises as quickly as possible get a lot less out of it.

We haven't introduced gamification, because that tends to encourage the wrong type of behavior (gaming the stats, rushing through). Not to say that some subtle or well-thought-out forms of gamification couldn't be a great addition to the product, but it is dangerous ground where getting it wrong can be disastrous, and it is safer to avoid it altogether.

There are many ways of slowing down and getting more value out of the process:

  • Try different approaches and solutions to the same problem.
  • If you use a standard library function, read all of the documentation for it.
  • If you aren’t able to explain how something works, go learn enough to write a blog post about it.

Some people move on, do other exercises, and then come back later to try something else. Does a submitter have to wait for someone to review their submission before submitting an updated version?

No. Many people will submit, poke around in the other solutions to see what other people did, learn from that, and then submit a new solution. Reviewers will sometimes look at three iterations, and then comment on the last one saying "I like the progression here" or "I think your second iteration was a lot more readable than this one" or whatever makes most sense.

For the whole process to work there must be both submitters and reviewers. Have you had any problems with finding people to perform the reviews and provide feedback?

Yes, it is a huge problem. Giving feedback is much harder than writing code.

I think there are three reasons why people might not give feedback:

  1. What's in it for me? People might not think they'll get anything out of it, but it's a powerful learning experience. You're forced to articulate gut feelings and to examine your assumptions, habits, and biases. When people disagree with you, especially smart people, you start by assuming that what they're saying makes sense, and then you wonder what in their experience led them to that conclusion. So you learn a lot more from giving feedback than from doing exercises. But you'll never know that (or believe it) until you experience it yourself, and to experience it you need to get over the barrier of actually doing it.
  2. It's scary. You're really putting yourself out there, even more than when you're writing code. A lot of people are afraid they'll be laughed at or say something stupid. A lot of people on the site are quite new to programming and not just new to the programming language in question. It's easy to think "What do I know, anyway?" or "Who am I to tell other people what's better?" The secret to overcoming this hurdle is to just go in and make observations and ask questions. "Oh, I didn't know you could do XYZ, neat!" or "I don't understand how this thing works. Can you explain?" or "It took me a really long time to figure out what these three lines were doing. Is there a way of making it more readable?" or "Is this just an idiom that I need to get used to?"
  3. A lot of people look at code and just see... code. It's hard to know what to say or what to look for. We don't have a lot of training in recognizing code smells, and so we can stare at code and not really have any idea what to say.

I see two ways of fixing the problem of not enough feedback.

The first is to provide learning materials about how to give feedback.

I did an experiment about a year ago where I put together a "code smell lab" for one of the exercises in the Go track. It had about 12 different solutions to the same problem, things that people typically do, and then for each example it asked the reader to think about what they might point out in the code. Following that, it had a number of comments from other people, things that people would typically say on one of these solutions. It asked the person to consider whether they agreed, compare it to what they had thought of, and then it discussed in detail the things that you might talk about, including pointers to resources about each thing.

It was a lot of work, but it was worth it. Several people used it to get comfortable giving feedback, and continued to give feedback on the entire track, not just on that exercise.

The second thing we could do is machine learning on the solutions. When you submit a solution we should be able to tell you, "Hey, this is really similar to these other things, and here are the comments that were submitted to those other things." Also, we should be able to tell you, "Go look at these solutions that are really different from yours." Then they could look at those solutions, learn from them, and start conversations.
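The similarity idea above could start much more simply than full machine learning, for example with token-overlap similarity between submissions. Here is a minimal sketch of that notion (this is not Exercism's actual code; the tokenizer, the Jaccard metric, and the function names are all assumptions for illustration):

```python
import re


def tokens(source: str) -> set[str]:
    """Split a source file into a set of identifier/keyword tokens."""
    return set(re.findall(r"[A-Za-z_]\w*", source))


def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two solutions (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def most_similar(submission: str, corpus: dict[str, str], n: int = 3) -> list[str]:
    """Return the ids of the n corpus solutions most similar to a new submission."""
    ranked = sorted(corpus, key=lambda k: jaccard(submission, corpus[k]), reverse=True)
    return ranked[:n]
```

With something like this, a new submission could be matched against past solutions, and the comments left on the closest matches (or, inverting the sort, on the most *different* solutions) could be surfaced to start conversations.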

Have you had to deal with any behavioral or social problems on the site?

Surprisingly little. A couple of people get really intense and respond or give feedback on everything, which can feel a bit like someone crowding your space because they always need the last word.

A lot of people are unintentionally curt. Some people will be super directive: "Do X!" "Y is WRONG!" Of course, there is no right or wrong. There are trade-offs. There are reasons why you probably wouldn't want to choose Y. It makes for a much more interesting conversation to say "In my experience XYZ" and "I see a lot of people doing PRQ, and here's a blog post," or "Here's some documentation." So, we could all learn a lot about having conversations about code. It also depends on the learning goals of the person submitting the code. Proficient programmers who are just ramping up in a language will need a different type of feedback than those who are learning programming for the first time. It also really depends on personality style.

Wired called Exercism the "site that teaches you to code well enough to get a job." What do you think about that?

Exercism is based on the idea that you can develop a high degree of fluency even at a low level of proficiency. In other words, you can be extremely fluent in the basic syntax, idioms, conventions, and standard library of a programming language, without being able to solve real-world problems in that language.

This is key: I specifically aimed Exercism at a level of proficiency that will not get you hired, unless you're already a programmer and know how to do real-world things.

This whole fluency and proficiency thing is useful, because it frees up cognitive resources. Once you have the fluency at the basic level, improving your proficiency and then ratcheting up your fluency is much easier.

Fluency is described by the Language Hunters project as, "What you can say when you’re woken up in the middle of the night with a flashlight in your face."

The Language Hunters describe four levels of proficiency where you have the ability to express and understand:

  1. Very simple, concrete ideas; e.g., "good music" or "drink"
  2. Simple, complete sentences; e.g., "How do I get to the party?"
  3. More complex, descriptive language; e.g., telling a story about what happened at the party.
  4. Complex political, social, economic topics; e.g., "Should parties be illegal?"

Now bring this back to the three categories of people who typically participate on Exercism:

CodeNewbies: people who are learning to program for the first time.

  • Exercism is not for complete programming beginners—the site assumes that students have used other resources to grasp the very basics.
  • The language they’re learning is incidental, but I think it is typically either Ruby, Python, JavaScript, Java, or C#.
  • They feel overwhelmed and often quite stuck. They’ve done hand-hold-y tutorials, but they can’t figure out where to begin to do things on their own.
  • Exercism provides lots of small wins and uncovers knowledge gaps, which they can go fill by using other resources.
  • The type of feedback that is most useful for them tends to be fairly Socratic, leading them to think about simplicity, clarity, and using the language well (basic syntax, data types, core library functions, etc.).

Polyglots: programmers who are ramping up in a new language.

  • These people usually need to learn the language for a new work project or to prepare for switching to a more interesting job.
  • They often feel frustrated because they’re used to being fluent in their usual languages and now suddenly they have to concentrate very hard on basic syntax.
  • Exercism lets them gain the fluency in the syntax quickly, and they’ll then be able to jump into the real-world stuff elsewhere with ease. Bonus: they get familiar with testing in the new language.
  • The type of feedback that is most useful to them is about language idioms and conventions, and sometimes things like readability and clarity.

Artisans: programmers who are diving deep into their primary language(s).

  • Typically this is someone who will spend hours with a thesaurus trying to find the right name for things.
  • They tend to be thinking about code smells, complexity, legacy code, refactoring, code review, naming things, etc.
  • Exercism gives them a safe place to explore these things and a community of people who care.
  • They don’t want feedback so much as a deep and nuanced conversation about design principles and ideas.

Both CodeNewbies and Polyglots are aiming for that high level of fluency at a low level of proficiency. However, for CodeNewbies the path to getting there is somewhat longer, because they also need to learn the basic programming concepts, not just the syntax, libraries, and conventions.

The Artisans don’t quite fit into the high-fluency, low-proficiency model, but they’re crucial to the project. They tend to give excellent feedback, they’re often at a place in their career where they’re developing their mentorship and leadership (both technical-oriented and people-oriented) skills, and they often care about code review.

Any plans to translate Exercism into other human languages?

Not at the moment, but this has come up a few times. I've gotten a number of requests about Portuguese, and one for Korean.

The problem is that I haven't even gotten the user experience worked out yet, and after I get that worked out I want to work on improving the feedback that people get. Human translation is pretty far down the list.

You told Wired you'd like to raise money to pay folks to work on the site. Did that ever happen?

I've finally started doing that now.

I've submitted a grant application, and I've been talking to a company in the UK whose founder has previously contributed to Exercism and believes in the project. If I can raise enough, this company will do the design work we need at a deep discount.

If we don't get the grant, I'm going to look into sponsorships. With so many eyeballs, I would like to think some of the publishing companies who produce learning materials for programming languages might want to partner with me.

Did you ever get Rikki the Bot running on other exercises?

Yes. Rikki is giving feedback on several exercises in Go and is about to start giving feedback in Crystal.

This has to be a lot of work. Is there a core team who keeps things ticking?

Yes, they do a lot of work. We've got a ton of people involved in the language tracks. A small core team focuses on the command-line client, and a handful of people are actively involved in the website.

I think that the product side of things would be much more maintainable if we could get this redesign accomplished. One problem is that when the user experience is a mess, there really is no point in having gorgeous code. The website is really hard to contribute to. At the moment we have a bunch of developers who have no experience with design, making suggestions about design; it's not a very productive state of affairs.

What sort of thoughts are you having about what's next for Exercism?

1. Governance of the curriculum and language tracks
2. Sponsorships to fund ongoing work (development, workshops, marketing, etc.)
3. Site redesign
4. Workshop and meetup kits to bring Exercism into meatspace
5. Machine learning and AI (à la Rikki the Bot)

The first thing I'm working on is figuring out the governance for all of the curriculum content. I want to hand that off to the community. It's already mostly in the hands of the community, but I want to formalize it.

I've been thinking a lot about the health and status of Exercism lately. I've spent some time and effort thinking about how to make the project healthier and how to help the contributors and maintainers be happier.

In particular, I would like to create a task force (a short-term, time-boxed working group) with the explicit goal of recommending a structure for how to better manage the Exercism language tracks.

I'd like this task force to think about how to get to a point where all the active tracks have three or four active maintainers, along with suggestions for how to:

  • Find new contributors (using meetups, conferences, blog posts, workshops, lightning talks, screencasts, or something else entirely)
  • Mentor and encourage contributors in order to increase frequency of contributions
  • Nominate new language maintainers from the existing contributors
  • Make maintainers happier and more excited about Exercism (using custom swag, sponsorships so we can send maintainers to relevant conferences, anything else that might be neat or cool)
  • Help maintainers use their work with Exercism to grow their reputation or network (for example, if they want to speak at conferences, write blog posts, or produce other artifacts)
  • Roll off a project (i.e., to stop being an active maintainer on the language track)
  • Signal that a language track is not actively maintained, and turn a track on or off (i.e., maintained and unmaintained)

Ideally I'd love to see a way to make it so that every single language track is sustainably maintained, nobody is risking burning out, and we have enough contributors and maintainers so someone can move on and do other things completely without guilt when they're ready to.

The "workshops" thing is interesting to me. Ashley Williams, one of the core maintainers of Node, was at RustFest in Berlin earlier this year and did an "intro to Rust" workshop using Exercism as the curriculum. She said it went really well. People were super excited.

I want to try doing more of that.

VM (aka Vicky) spent most of her 20 years in the tech industry leading software development departments and teams, and providing technical management and leadership consulting for small and medium businesses.

1 Comment

"Of course, there is no right or wrong."

Hmmmm, so it is "trade-offey" rather than "wrong" to be traveling on the wrong side of the highway? Let's hope your local medical infrastructure is on the greener side of things...

To me, there are clear cases of "wrong," where things are wrong in a way that may lead to outright crashes in all situations/environments (and people simply haven't run their tests thoroughly enough yet to realize that their chosen "solution" is wrong).

But yeah, "Do X!" "Y is WRONG!" is an unnecessarily harsh statement, due to being woefully abridged (any such statement needs to be followed by a detailed explanation of why this is wrong - BTW, this also applies to coding style guideline documents, which should back up their statements with detailed explanations as well).

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.