A (perhaps non-accidental) failure to distinguish linguistically between reason (dianoia, ratio) and intelligence (noûs, intellectus) has given rise to the term “artificial intelligence” which consequently engenders misunderstanding if not indeed fear. In keeping with the original meaning of the words in question, the term “artificial intelligence” should de jure be denominated “artificial reason,” a correction which might help to resolve a great deal of confusion and perhaps even forestall disaster (Bérard, 2018).
The proper outcome of using AI to increase the predictive power of measuring a job applicant’s knowledge, skills, and abilities, as well as their psychological characteristics, is a significantly more accurate hiring process. In fact, without the use of “artificial reason,” the best hiring managers typically can hope to explain only about 10% of the variance in quality of hire. Unfortunately, that means roughly 90% of the variance remains unexplained by the system.
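The 10% figure can be checked directly from the footnoted meta-analysis: a validity coefficient of r = 0.31 (the 80th-percentile effect size reported by Bosco et al., 2015) explains roughly 10% of performance variance when squared. A minimal check of that arithmetic:

```python
# Variance explained by a selection instrument is the squared
# validity coefficient (r^2, the coefficient of determination).
# Using the 80th-percentile correlation of 0.31 from Bosco et al. (2015):
r = 0.31
variance_explained = r ** 2

print(f"Variance explained: {variance_explained:.1%}")  # 9.6%, i.e. about 10%
```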
When “artificial reason” is adopted, for example, an evolutionary algorithm that “learns” as it goes cycles through billions to sextillions (a one with twenty-one zeros after it) of potential custom models for future upload into the company’s Applicant Tracking System. Computing power efficiently raises the quality of hire to as much as 90%. This means that for the factors being measured and the Key Performance Indicator variables being predicted, error is reduced to about 10%.
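The evolutionary search described above can be sketched in miniature. This is an illustrative mutate-and-select loop over candidate linear scoring models, not any vendor’s actual algorithm; the applicant data, model form, and parameters are all hypothetical.

```python
import random

random.seed(0)

# Toy applicant data: (assessment scores, later KPI outcome).
# Values are illustrative, not real assessment data.
DATA = [([0.9, 0.2, 0.7], 0.8), ([0.1, 0.8, 0.3], 0.4),
        ([0.7, 0.6, 0.9], 0.9), ([0.2, 0.3, 0.1], 0.2)]

def fitness(weights):
    """Negative squared error of a linear scoring model on the toy data."""
    err = 0.0
    for scores, kpi in DATA:
        pred = sum(w * s for w, s in zip(weights, scores))
        err += (pred - kpi) ** 2
    return -err

def evolve(generations=200, pop_size=30):
    """Mutate-and-select search over candidate weight vectors."""
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # keep the fitter half
        children = [[w + random.gauss(0, 0.05) for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children        # next generation
    return max(pop, key=fitness)

best = evolve()
print("best model fitness:", round(fitness(best), 4))
```

A production system would evaluate vastly more candidate models against real hiring outcomes, but the generate-score-select cycle is the same.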
Another benefit of engaging “artificial reason” in customizing your business’s ideal talent profile is overcoming the fact that most local validation studies have quite small samples. Ensemble techniques (bagging, boosting, and random forests) create ten or more models, which are then consolidated into a single, more accurate model. The statistical power to distinguish good hires from poor hires in the workplace is not a pipe dream. In the end, custom talent profiles may be deployed over larger geographical areas with less angst.
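Bagging in particular is easy to illustrate on a small validation sample. The sketch below builds ten simple threshold classifiers, each trained on a bootstrap resample, and combines them by majority vote; the sample, the stump model, and the model count are all illustrative assumptions, not a real validation study.

```python
import random
import statistics

random.seed(1)

# Tiny validation sample: (assessment score, good_hire flag). Illustrative only.
SAMPLE = [(0.9, 1), (0.8, 1), (0.75, 1), (0.4, 0), (0.3, 0),
          (0.55, 1), (0.35, 0), (0.6, 0), (0.85, 1), (0.2, 0)]

def fit_stump(data):
    """Fit a one-threshold 'stump': midpoint between the class means."""
    good = [x for x, y in data if y == 1]
    poor = [x for x, y in data if y == 0]
    if not good or not poor:  # degenerate resample: fall back to midpoint
        return 0.5
    return (statistics.mean(good) + statistics.mean(poor)) / 2

def bagged_thresholds(data, n_models=10):
    """Fit n_models stumps, each on a bootstrap resample (bagging)."""
    models = []
    for _ in range(n_models):
        boot = [random.choice(data) for _ in data]  # sample with replacement
        models.append(fit_stump(boot))
    return models

def predict(models, score):
    """Majority vote across the bagged stumps."""
    votes = sum(score > t for t in models)
    return 1 if votes > len(models) / 2 else 0

models = bagged_thresholds(SAMPLE)
print("predict 0.9 ->", predict(models, 0.9))
print("predict 0.2 ->", predict(models, 0.2))
```

Averaging over resamples stabilizes the decision boundary, which is exactly what a small local validation study needs.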
When workplace data scientists leverage “artificial reason” to create data mining models, companies lower the cost of sourcing high-quality talent. Building such a model requires excluding the factors that have little impact on predicting employee performance, retention, workplace safety, and engagement. Unfounded biases about how many years of experience one needs, or what type of academic degree one must have to be considered for employment, might fall away completely. Many of these demographic factors are recognized as an outdated and deficient way of screening in candidates.
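One simple way to see which factors carry little predictive weight is to correlate each one with the outcome and drop the weak ones. The data, factor names, and the 0.30 cutoff below are hypothetical illustrations of that screening step, not a recommended validation procedure.

```python
import statistics

# Illustrative applicant factors and a performance KPI (not real data).
FACTORS = {
    "conscientiousness": [0.8, 0.6, 0.9, 0.4, 0.7],
    "years_experience":  [12,  3,   7,   15,  5],
    "cognitive_score":   [0.7, 0.5, 0.9, 0.3, 0.6],
}
KPI = [0.9, 0.5, 0.95, 0.4, 0.6]

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Keep only factors whose validity clears a (hypothetical) cutoff.
CUTOFF = 0.30
kept = {name for name, vals in FACTORS.items()
        if abs(pearson(vals, KPI)) >= CUTOFF}
print("factors retained:", kept)
```

In this toy data, years of experience barely correlates with the KPI and drops out, mirroring the article’s point that such demographic proxies often add nothing.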
Through advanced people data science, pre-hire assessment results captured during the job application or employee onboarding process become many times more valuable for one-on-one employee business coaching. No longer will the manager discuss generic, one-size-fits-all coaching comments. Statistical insights elevate the game and customize it to the person. In a recent data science analysis, out of five below-average salespeople, one had the potential to become an above-average performer, while three could move up to average performance.
Forward-looking Assessment Publishers will not delay upgrading the scoring algorithms they use to create Overall Job Match and Fit Scores. Such algorithms should be transparent, elegant, and simple. Byzantine tie-breaking rules and opaque factor weightings add little predictive accuracy and do not stack up against more rational scoring methods.
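A transparent overall fit score can be as simple as a published weighted average. The factor names and weights below are hypothetical, chosen only to show that the entire scoring rule fits in a few auditable lines.

```python
# Hypothetical factor weights for an Overall Job Match score; in a
# transparent scheme the weights are published, not hidden in tie-breakers.
WEIGHTS = {"skills": 0.5, "cognitive": 0.3, "personality": 0.2}

def overall_fit(factor_scores):
    """Weighted average of 0-1 factor scores; no hidden tie-breaking."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[f] * factor_scores[f] for f in WEIGHTS)

print(overall_fit({"skills": 0.8, "cognitive": 0.6, "personality": 0.9}))  # 0.76
```

Anyone reviewing such a score can trace exactly how each factor contributed, which is the transparency the paragraph above calls for.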
2. A meta-analysis by Frank A. Bosco, Herman Aguinis, Kulraj Singh, James G. Field, and Charles A. Pierce, published in the Journal of Applied Psychology, 2015, Vol. 100, No. 2, 431–449, shows in Table 3 that at the 80th percentile of effect sizes one can expect a correlation of 0.31 between performance and psychological characteristics. Squaring that correlation yields about a 10% explanation of the variance in employee performance.