How many gigabytes of data did we (the people of Earth) create yesterday?
...brain. is. thinking...
More than 2.5 billion!
And it's growing. Yes, it's hard for us to wrap our human brains around it. So, the question the Command Line Heroes podcast tackles this week is: How do we handle and use such enormous amounts of data?
This ever-growing flood of data is mind-boggling, but it's also exciting. With computers doing the hard work, we have the potential to do so much with it.
Quick history lesson
A long, long time ago: Scribbles on clay tablets.
1450: Information revolution with the printing press.
Today: Data at the speed of light with computers and the internet.
How do we put "the flood of data" to work?
Everyone today is adopting what's called a "data mindset" -- from doctors and developers to Starbucks baristas. It's the world we live in.
So, when people are searching for answers -- like whether someone is likely to develop cancer -- we should be able to search the enormous amounts of data we have for clues.
The roadblocks
- Access to the data.
- A way to process and read the data.
We need scalable, open data systems. You probably understand the scalable part, but why open? First, to keep infrastructure cheap. Second, to move fast: open systems remove a lot of the friction that would otherwise slow a sprint toward a goal.
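To make the "scalable" part a little more concrete, here's a minimal sketch in Python of the streaming idea behind most big-data tooling: process one record at a time so memory use stays flat no matter how large the dataset grows. The file name events.ndjson and the status field are hypothetical examples, not anything from the podcast.

```python
import json

def count_matching(path: str, field: str, value: str) -> int:
    """Stream a large newline-delimited JSON file, counting records
    where the given field equals the given value."""
    count = 0
    with open(path, "r", encoding="utf-8") as f:
        for line in f:  # one record at a time, so memory use stays flat
            record = json.loads(line)
            if record.get(field) == value:
                count += 1
    return count

if __name__ == "__main__":
    # Hypothetical usage: count error events in a large log file.
    print(count_matching("events.ndjson", "status", "error"))
```

The same pattern -- read, process, discard, repeat -- is what lets open frameworks scale from one laptop to a whole cluster.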
Learn more about ChRIS, a new open source, container-based medical imaging platform, on the podcast.