Scrum & Big Data
I’ve been reading Viktor Mayer-Schoenberger and Kenneth Cukier’s stimulating new book Big Data: A Revolution That Will Transform How We Live, Work and Think. The premise of the book is that, now that civilization has the tools to both collect and analyze huge amounts of data, correlation is far more telling than causation. Meaning that when dealing with billions of data points rather than hundreds, knowing the what, rather than the why, is good enough.
One example the authors use frequently is how Google was able to comb its massive trove of search queries on common flu symptoms to discover where the 2009 H1N1 flu epidemic was spreading.
The CDC was doing the same thing using traditional sampling techniques. Google’s method was more accurate and in real time (two weeks ahead of the CDC survey), a huge advantage in controlling the outbreak. Thanks to Google, the CDC learned the what (ironically, the where in this case) but not the why (which H1N1 carrier was the vector). And that was good enough to help stem the spread of the virus.
Despite the messiness of Google’s data, it was far more effective because of its size. Big data analysis was made possible by easy access to Google’s search engine (3 billion searches a day), large servers to store the information, and clever algorithms to sort the data into something meaningful.
A cornerstone of Scrum is its ability to measure work output, i.e., velocity. As the authors of Big Data point out, much of human knowledge is based on the ability to measure a given phenomenon. Once we can measure it, we can start to manipulate the inputs and judge from the resulting output whether we’ve improved something. (Doing this again and again is continuous improvement, the impetus of Scrum.)
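To make that concrete, here is a minimal sketch in Python with made-up numbers of what “measuring work output” looks like: velocity is simply the story points a team completes each Sprint, which gives you a number you can track as you change the inputs.

```python
# Hypothetical Sprint history for one team; velocity is simply the
# story points completed in each Sprint.
sprints = [
    {"sprint": 1, "points_completed": 21},
    {"sprint": 2, "points_completed": 26},
    {"sprint": 3, "points_completed": 24},
]

velocities = [s["points_completed"] for s in sprints]
average_velocity = sum(velocities) / len(velocities)
print(f"Velocity per Sprint: {velocities}")
print(f"Average velocity: {average_velocity:.1f} points")
```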
Because Scrum has made work measurable in more accurate ways than ever before, we could digitize the metrics and create a huge searchable data set. For example, Microsoft has had over 3,000 Scrum team members for several years. Imagine the possible insights if all that data were pooled and subjected to smart algorithms. Or if companies that build and maintain virtual Scrum boards started storing all Sprint data from every client using their tools.
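A pooled data set like that could be as simple as one row per team per Sprint. The sketch below is purely hypothetical; the column names (team_size, hours_per_day, velocity) are my own illustration, not any vendor’s actual schema.

```python
import pandas as pd  # assumes pandas is available

# Hypothetical pooled data set: one row per team per Sprint, collected
# from many organizations' virtual Scrum boards.
pooled = pd.DataFrame([
    {"org": "A", "team": "alpha", "sprint": 14, "team_size": 6, "hours_per_day": 8, "velocity": 24},
    {"org": "A", "team": "alpha", "sprint": 15, "team_size": 7, "hours_per_day": 8, "velocity": 19},
    {"org": "B", "team": "kappa", "sprint": 3,  "team_size": 5, "hours_per_day": 6, "velocity": 27},
    {"org": "B", "team": "kappa", "sprint": 4,  "team_size": 5, "hours_per_day": 6, "velocity": 29},
])

# Once the data is pooled, summary statistics across every team are one call away.
print(pooled.describe())
```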
Perhaps we could see that adding a new team member results in a temporary drop in productivity but an overall long-term gain. Managers could then hold off adding a team member before a key product release. Or perhaps the data might show that teams were more productive when working only six hours a day instead of eight. The possibilities are really exciting.
By using big data techniques, thought leaders in the Scrum community would no longer have to conduct case studies or theorize about what might create a process improvement. Nor would Scrum Masters need to tweak their process and wait until the end of the Sprint to see the results. Rather, they could simply query the data set and get an answer immediately.
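As a hypothetical illustration of what such a query might look like, here is a short pandas sketch against the kind of pooled Sprint data described above (all names and numbers are made up):

```python
import pandas as pd

# Same hypothetical pooled Sprint data as in the sketch above.
pooled = pd.DataFrame([
    {"org": "A", "team": "alpha", "sprint": 14, "team_size": 6, "hours_per_day": 8, "velocity": 24},
    {"org": "A", "team": "alpha", "sprint": 15, "team_size": 7, "hours_per_day": 8, "velocity": 19},
    {"org": "B", "team": "kappa", "sprint": 3,  "team_size": 5, "hours_per_day": 6, "velocity": 27},
    {"org": "B", "team": "kappa", "sprint": 4,  "team_size": 5, "hours_per_day": 6, "velocity": 29},
])

# "Query the data set and get an answer immediately":
# average velocity for teams working six-hour vs. eight-hour days ...
print(pooled.groupby("hours_per_day")["velocity"].mean())

# ... and how velocity moves in the Sprint right after a team grows.
pooled = pooled.sort_values(["org", "team", "sprint"])
pooled["team_grew"] = pooled.groupby(["org", "team"])["team_size"].diff() > 0
print(pooled.groupby("team_grew")["velocity"].mean())
```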
Big Data is a big deal.
 -- Joel Riddle