How do we handle and use such enormous amounts of data?

The Command Line Heroes podcast tells us why our big data questions require scalable, open answers.

How many gigabytes of data did we (the people of Earth) create yesterday?

...brain. is. thinking...

More than 2.5 billion!

And it's growing. Yes, it's hard for us to wrap our human brains around it. So, the question the Command Line Heroes podcast deals with this week is: How do we handle and use such enormous amounts of data?

The sheer growth of data is mind-boggling, but it's also exciting. We have the potential to do so much with so much data... with computers doing the hard work.

Quick history lesson

A long, long time ago: Scribbles on clay tablets.

1450: Information revolution with the printing press.

Today: Data at the speed of light with computers and the internet.

How do we put "the flood of data" to work?

Everyone today is thinking with what is called "a data mindset" -- from doctors and developers to Starbucks baristas. It's the world we live in.

So, when people are searching for answers to questions like whether someone is likely to get cancer, we should be able to search the enormous amounts of data we have for clues.

The roadblocks

  1. Access to the data.
  2. A way to process and read the data.

We need scalable, open data systems. You probably understand the scalable part, but why open? First, to keep infrastructure cheap. Second, to move fast: sprinting towards a goal creates a lot of friction, and open systems help remove it.

Learn more about ChRIS, a new open source, container-based imaging platform, on the podcast.

Jen leads a team of community managers for the Digital Communities team at Red Hat. She lives in Raleigh with her husband and daughters, June and Jewel.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.