Tuesday, May 19, 2015

RT @dan_steer: #ATD2015 #TU316 All the graphics you will ever need here http://t.co/knvKk5ad8O


from Twitter https://twitter.com/srini_venkat

The New York Stock Exchange (NYSE) has today announced the launch of a #bitcoin price index (NYXBT) http://t.co/nJLtcC4Jk4


from Twitter https://twitter.com/srini_venkat

RT @brainpicker: The psychology of writing and the cognitive science of the perfect daily routine http://t.co/bRBdEBDOwz http://t.co/4RB8OSA0wB


from Twitter https://twitter.com/srini_venkat

New artwork for sale! - "Sunlit House and the Eerie Storm Clouds" - http://t.co/bOCIjt7PDR @fineartamerica http://t.co/Nw7tgtGLIW


from Twitter https://twitter.com/srini_venkat

Sunlit House and the Eerie Storm Clouds


from Twitter https://twitter.com/srini_venkat

Saturday, March 29, 2014

When Correlation is Obvious do we need to Prove it?

As Talent Development and L&D professionals, when you show a correlation between the metrics from your Talent Development initiatives and the Business Performance Measures, many of you have probably been asked: "How do you know that the (Talent Development) Initiative or the Training Program was the key contributor to the Business or Individual Performance Improvement?"

They would also argue that Correlation is not Causation. So we try different Isolation Strategies (i.e., ways to isolate the impact of the Learning Intervention from other potential factors that might also have influenced the result), such as "Control Groups", "Pre & Post Intervention Measurements", etc. While isolation strategies definitely help prove the point, there are many practical challenges in actually running these studies in most organizations because of the very agile nature of the business, a mobile (moving) workforce, virtual teams, contingent staff, etc. A simple way to combine the two approaches is sketched below.
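To make the idea concrete, here is a minimal sketch (in Python) of how a "Control Group" plus "Pre & Post Intervention Measurement" study can be summarized. The group names, scores, and the simple difference-in-differences calculation are all illustrative assumptions, not data from any real initiative.

```python
# Hypothetical pre/post scores for a trained group and a control group.
# All numbers are illustrative only.
trained_pre  = [62, 58, 71, 65, 60]   # performance metric before the program
trained_post = [74, 70, 80, 78, 72]   # same people after the program
control_pre  = [61, 59, 70, 66, 63]   # comparable group, no training
control_post = [64, 62, 72, 68, 65]

def mean(xs):
    return sum(xs) / len(xs)

# Average improvement in each group
trained_gain = mean(trained_post) - mean(trained_pre)
control_gain = mean(control_post) - mean(control_pre)

# The "isolated" effect of the intervention is the gain beyond what the
# control group achieved anyway (a simple difference-in-differences).
isolated_effect = trained_gain - control_gain

print(f"Trained group gain : {trained_gain:.1f}")
print(f"Control group gain : {control_gain:.1f}")
print(f"Isolated effect    : {isolated_effect:.1f}")
```

The point of the sketch is only that the control group tells us how much improvement would have happened anyway; what remains is the part we can more credibly attribute to the intervention.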

Having said that, should we step back and ask whether we really need to go to this length to prove that our correlation is right? If there is a strong correlation between A and B for whatever time period we study, should we still spend time, effort and resources to validate, once again, that B is caused by A?

With this context, I would like you all to read the post titled "When to Act on a Correlation, and When Not To" by David Ritter on the Harvard Business Review (HBR) blog.

I could not agree more with David Ritter. I am also curious to know what you, the readers, think.

Monday, March 17, 2014

Measuring Learning or Measuring the Impact of Learning?

Recently, I saw this question on Quora.

What is stopping companies from measuring learning today?

And here is what I think.

Most companies do measure Learning. 

But almost all of them measure only "Efficiency", i.e., how many people learnt, the cost, the number of hours of learning, etc.

They also measure Learner Reaction or Satisfaction, which is basically Level 1 of Kirkpatrick's four levels of training evaluation (refer: Evaluating Training Programs: Kirkpatrick's Four Levels).

There are also quite a few organizations that measure the "Effectiveness" of learning, i.e., knowledge gain, retention, learning transfer on the job, etc.

But there are only very few organizations that measure "Learning Outcomes & Business Impact". While measuring the Learning Outcomes and reporting on the Business Impact of Learning is difficult, it is still doable. If I have to state it in very simplistic terms:

a) Agree on the expected Outcomes with the Business Stakeholder 
b) Identify the Observable Behaviors that will create this Outcome and ensure the Stakeholders agree on them 
c) Determine the Metrics & Measures to track that Behavior, and make the Stakeholder an observer of that Behavior Change 
d) Report on the Behavior Change and correlate it with the Business Performance/Outcomes (a simple correlation sketch follows this list) 
e) Compare the Actual Outcomes with the Expected Outcomes
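As a rough illustration of step (d), here is a minimal sketch that correlates a tracked behavior metric with a business outcome over a few reporting periods. The metric names and values are hypothetical, and the Pearson correlation used here is just one reasonable way to report the relationship, not the only one.

```python
# Hypothetical monthly data: observed behavior metric vs. business outcome.
# Values are illustrative only.
behavior_score = [3.1, 3.4, 3.8, 4.0, 4.3, 4.5]   # e.g., coaching conversations per rep
business_kpi   = [102, 108, 115, 118, 125, 131]   # e.g., monthly sales per rep (in $K)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson(behavior_score, business_kpi)
print(f"Correlation between behavior change and business outcome: r = {r:.2f}")
```

A number like this, reported alongside the stakeholder's own observations of the behavior change, is usually enough to support the comparison of actual versus expected outcomes in step (e).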


Check out the discussion on Quora at this link: http://t.co/9dL7CPrUwD

