20 years ago, my wife was told she couldn't get pregnant. Since then, she has delivered four children.
What do this example, a radiologist assessing breast cancer, a tuberculosis specialist, a credit underwriter, a risk manager, a fingerprint expert, a psychologist, a judge, etc. have in common?
According to my understanding of the book Noise (by Daniel Kahneman, Olivier Sibony, and Cass Sunstein), it can be simplified into three points:
- They might make mistakes when making a judgment (clearly the case with my wife).
- They might judge the same case differently when presented at different points in time (say, three months apart — not the case for my wife, but not uncommon).
- The judgment between Expert A and Expert B on the same case could differ dramatically.
This raises a key question: if experts, trained extensively in their fields, show such inconsistency in their outputs (what the book calls "noise"), then assessments within corporate organizations, as the book also points out, are very likely to suffer even more when they drive decisions about people's careers. I've personally witnessed corporate assessments, including OKRs, being misused or poorly understood... at best.
This is likely to become one of the major change management challenges as AI is introduced more aggressively into day-to-day processes. Therefore, one solution could be a framework that analyzes not only how jobs might change, but also why individuals might leave their roles (for instance, because the job simply became too boring).
I experimented with a prototype to estimate the likelihood of employees either being displaced or choosing to leave. It was structured in an agentic way:
- Capture and structure employee profiles: roles, responsibilities, current salary, job descriptions.
- Track how job titles evolved over time *within* the company: required experience and education, key skills added or phased out.
- Analyze how job requirements and expectations changed year over year.
And then:
- Estimate each employee’s likelihood to leave.
- Estimate each role’s likelihood of being displaced.
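The steps above could be sketched as a simple pipeline. Everything below is a hypothetical illustration, not the actual prototype: the class names, fields, and scoring heuristics are invented for the sketch, and a real system would replace the toy formulas with model-driven estimates.

```python
from dataclasses import dataclass, field

@dataclass
class EmployeeProfile:
    # Step 1: capture and structure the profile (hypothetical fields)
    name: str
    role: str
    salary: float
    responsibilities: list = field(default_factory=list)

@dataclass
class RoleHistory:
    # Step 2: how a job title evolved *within* the company, year by year
    title: str
    skills_by_year: dict  # e.g. {2021: {"sql"}, 2024: {"sql", "llm prompting"}}

def skill_churn(history: RoleHistory) -> float:
    """Step 3: fraction of today's required skills absent in the earliest year."""
    years = sorted(history.skills_by_year)
    first = history.skills_by_year[years[0]]
    last = history.skills_by_year[years[-1]]
    if not last:
        return 0.0
    return len(last - first) / len(last)

def displacement_likelihood(history: RoleHistory) -> float:
    # Toy heuristic: high skill churn suggests the role is being redefined
    return min(1.0, skill_churn(history))

def attrition_likelihood(profile: EmployeeProfile, history: RoleHistory) -> float:
    # Toy heuristic: few responsibilities in a fast-changing role = boredom risk
    boredom = 1.0 / (1 + len(profile.responsibilities))
    return round(0.5 * boredom + 0.5 * displacement_likelihood(history), 2)

# Usage: one employee, one evolving role
analyst = EmployeeProfile("A.", "Data Analyst", 55_000.0, ["reporting"])
role = RoleHistory("Data Analyst", {
    2021: {"sql", "excel"},
    2024: {"sql", "python", "llm prompting"},
})
print(displacement_likelihood(role))      # 2 of 3 current skills are new
print(attrition_likelihood(analyst, role))
```

In an agentic setup, each step would be a separate agent feeding structured output to the next; the deterministic scoring here just stands in for that final estimation stage.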
Interestingly, "noise" (variability in judgment) also shows up here: depending on which AI you use and how you use it, the output can vary significantly. I've written a separate post about this; see the comments.
Still, the prototype — despite its flaws — offered an interesting foundation to support change management by providing insights into why an individual might leave or be displaced. It could help prepare upskilling plans or guide team reorganizations.
All of this suggests that a significant amount of data will be needed, which could overwhelm everyone involved. Delegating judgment entirely to AI will likely hit limitations, which is why keeping human intuition as the final step in the process could make a meaningful difference.
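One concrete way to keep human intuition as the final step is a human-in-the-loop gate: the model only recommends, and any estimate above an escalation threshold is routed to a human reviewer who makes the actual call. A minimal sketch (the threshold value and field names are hypothetical):

```python
def final_decision(ai_score: float, human_review) -> dict:
    """AI proposes; a human decides once the stakes are high enough.

    ai_score:     likelihood estimate in [0, 1] from the upstream pipeline
    human_review: callback standing in for the human judgment step
    """
    ESCALATION_THRESHOLD = 0.7  # hypothetical cut-off for "high stakes"
    if ai_score < ESCALATION_THRESHOLD:
        # Low-stakes case: no career-affecting action, just keep monitoring
        return {"decision": "monitor", "decided_by": "ai"}
    # High-stakes case: the human reviewer owns the final decision
    return {"decision": human_review(ai_score), "decided_by": "human"}

# Usage: a reviewer callback supplies the human judgment
verdict = final_decision(0.85, human_review=lambda s: "discuss upskilling plan")
print(verdict["decided_by"])  # prints "human"
```

The design choice is deliberate: the threshold decides only *who* decides, never *what* is decided, so the AI's noise never translates directly into a career outcome.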
This is where machines and humans will need to collaborate the most.