How do we handle and use such enormous amounts of data?

The Command Line Heroes podcast tells us why our big data questions require scalable, open answers.

computer servers processing data

How many gigabytes of data did we (the people of Earth) create yesterday?

...brain. is. thinking...

More than 2.5 billion!

And it's growing. Yes, it's hard for our human brains to wrap around that number. So, the question the Command Line Heroes podcast deals with this week is: How do we handle and use such enormous amounts of data?

The ever-growing flood of data is mind-boggling but also exciting. We have the potential to do so much with so much data... with computers doing the hard work.

Quick history lesson

A long, long time ago: Scribbles on clay tablets.

1450: Information revolution with the printing press.

Today: Data at the speed of light with computers and the internet.

How do we put "the flood of data" to work?

Everyone today is thinking with what is called "a data mindset" -- from doctors and developers to Starbucks baristas. It's the world we live in.

So, when people search for answers, like whether someone is likely to get cancer, we should be able to mine the enormous amounts of data we have for clues.

The roadblocks

  1. Access to the data.
  2. A way to process and read the data.

We need scalable, open data systems. The scalable part is probably obvious, but why open? First, to keep infrastructure affordable. Second, to move fast: closed systems add a lot of friction when you're sprinting toward a goal.

Learn more about a new open source, container-based imaging platform called chRIS on the podcast.

About the author

Jen Wike Huger - Jen Wike Huger is the Community Manager for Opensource.com. Catch her at the next open source virtual event, or ping her on Twitter. She lives in Raleigh with her husband and daughters, June and Jewel.