
Tuesday, January 24, 2017

Another AI Invasion, Meritocracy?

I recently wrote about Artificial Intelligence, or AI, and how it is taking over aspects of the hiring and management processes. As a gatekeeper for hiring managers, AI is sorting and ranking applicants. If you hope to woo a recruiter, it is increasingly likely you must first impress some computer somewhere. 

Recently, the British Broadcasting Corporation, the BBC, questioned whether AI was the path to meritocracy, or more simply a merit-based society or sub-culture. In Beyond 'Brogrammers': Can AI create a meritocracy?, the BBC discusses innovative and intriguing efforts by Silicon Valley to manage Silicon Valley.

There is a perception of homogeneity in certain industries or societies. There is also empirical data supporting the conclusion that human beings have biases and prejudices, often indiscernible or even unconscious. In Silicon Valley, there is a perception that hiring, evaluating, and promoting computer programmers is driven by factors other than performance. Some contend that human connections are more important than ability, and that new hires often "went to top schools" and are "hired by their friends or former fraternity brothers" based upon familiarity and homogeneity rather than ability.

That bias exists may not come as a surprise to some. Others contend that, if we are observant, we see it in the world around us in various forms. There are those who contend that much human behavior is directed by a vast spectrum of assumptions and beliefs of which we may not even be consciously aware.

An artificial intelligence developer in Silicon Valley, Ms. Masood, contends that research supports the existence of hiring and promotion bias in the programmer field. She claims there is proof that "there was a certain type of programmer that would still move forward in interviews," to that person's benefit and the corresponding detriment of those who were culled from the pool through the process. That predictability, she says, indicates that bias is driving decisions, and it has led to the characterization of such favored hires as "brogrammers." She says that the outcome (who is hired, what they look like, or their experiences) proves bias in the process.

Her new AI is named "Tara," which stands for "Talent Acquisition and Recruiting Automation." Tara is a reader of computer code. Like Neo (The Matrix, 1999), Tara sees the world as a series of ones and zeros, all facade and superficial appearance removed, just ones and zeros. And Tara reads the code created by the various programmers without regard to "biographical information such as age, race, gender or where you have worked in the past or where you went to university."

By intentionally eliminating these demographic data points, the evaluation of programming quality is performed solely upon "the work they have produced rather than who they are or who they know." It is thus focused on the establishment of "a meritocracy." Advocates contend that this AI, and its evaluative methodology, will create opportunities for "smart and entrepreneurial" people, even if they did not go to the right schools or associate with the right clubs or groups.
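
The BBC article does not describe how Tara actually works internally, but the idea of demographics-blind evaluation is simple enough to illustrate. Below is a minimal, hypothetical Python sketch; the field names, metrics, and weights are all invented for illustration, and the point is only that the scoring function never reads the demographic fields.

```python
# A minimal, hypothetical sketch of demographics-blind candidate scoring.
# Nothing here reflects Tara's actual implementation; the field names,
# metrics, and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Candidate:
    # Demographic fields exist in the raw record...
    name: str
    age: int
    university: str
    # ...alongside metrics derived purely from submitted code.
    tests_passed_ratio: float   # fraction of unit tests passed (0 to 1)
    defect_rate: float          # defects found per 100 lines reviewed
    style_score: float          # adherence to a style checker (0 to 1)

def blind_score(c: Candidate) -> float:
    """Score a candidate using only work-product metrics.

    The demographic fields (name, age, university) are deliberately
    never read, so they cannot influence the ranking.
    """
    return (0.5 * c.tests_passed_ratio
            + 0.3 * c.style_score
            + 0.2 * (1.0 - min(c.defect_rate / 10.0, 1.0)))

candidates = [
    Candidate("A", 24, "State U", 0.92, 1.5, 0.88),
    Candidate("B", 41, "Ivy U", 0.71, 4.0, 0.95),
]

# Rank strictly on the blind score; who the candidate is never enters.
for c in sorted(candidates, key=blind_score, reverse=True):
    print(f"{c.name}: {blind_score(c):.3f}")
```

Whether metrics like these actually capture "merit" is, of course, the open question the rest of this post considers.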

Tara has implications across the programming field. It could be used to evaluate programmers seeking employment, much in the way other task-testing is used to rank hiring candidates (I still remember taking "typing tests"). Likewise, it could be used within a company to evaluate employees for retention, promotion, raises, etc. Many companies seek objective, measurable performance for such purposes. However, objective indicia of proficiency and productivity are often hard for managers to find.

The concept also implicates management more directly. With an algorithm monitoring and measuring output and performance, it is plausible that fewer managers will be needed. It is also predicted that opportunities for remote working and the "gig" or "freelance" workplace are enhanced by these performance-measurement systems. If performance can be evaluated objectively through automation, it may facilitate a work environment that does not depend upon specific work hours, physical presence, or "supervision" in its traditional sense. Performance could be measured, and payment made, based upon Tara's receipt of your work and satisfaction with its quantity and quality.

And, it may enhance the potential for meritocracy in the workplace. While this may seem focused upon a single industry, and not of too much interest to others, it signals a metric-based evaluation process. If an algorithm can measure the quality and quantity of code, why can it not measure the effectiveness and efficiency of bricklayers (automated or real), or police officers (miles patrolled, people engaged, premises examined, reports generated), or maintenance technicians, or . . . you get the point. For any occupation, performance criteria might be developed. Through wearable technology and GPS, data might be constantly collected, and through an algorithm, performance might be evaluated: a true meritocracy.
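
As a purely hypothetical sketch of that extension, the same blind-scoring pattern could be applied to the police-officer metrics imagined above. The per-shift benchmarks, the equal weighting, and the notion that these four numbers capture performance at all are assumptions for illustration, not a proposal.

```python
# A purely hypothetical sketch of metric-based scoring for a
# non-programming role, using the patrol metrics imagined above.
# The per-shift benchmarks and equal weighting are assumptions only.

def patrol_score(miles_patrolled: float,
                 people_engaged: int,
                 premises_examined: int,
                 reports_generated: int) -> float:
    # Normalize each raw count against an assumed per-shift benchmark,
    # cap at 1.0 so no single metric can dominate, then average.
    parts = [
        min(miles_patrolled / 40.0, 1.0),
        min(people_engaged / 20.0, 1.0),
        min(premises_examined / 15.0, 1.0),
        min(reports_generated / 5.0, 1.0),
    ]
    return sum(parts) / len(parts)

# One example shift: the output is a single merit number, computed the
# same way for every officer, with no demographic inputs at all.
print(round(patrol_score(35.0, 18, 10, 4), 3))
```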

Such a process might skew employment decisions away from bias, or even away from the potential for or perception of bias. It might address the current perceptions in various industries. In the programmer field that Tara is addressing, it turns out that "in the US, women held just 25% of professional computing occupations in 2015." And, "more than 90% of those women were white. Just 5% were Asian, 3% African American and 1% Hispanic." Does this data support the conclusion that there is bias in the hiring and promotion processes? If decisions were made by an automaton like Tara, without access to demographic information, would hiring and retention shift workplace participation, and therefore opportunity?

Some contend that computer industry woes run deeper. They concede that the demographic picture is not diverse, but say the reason is "there aren't many trained female computer scientists to recruit in the first place." There is some empirical data to support the perceived lack of supply: "women earn 57% of all undergraduate university degrees in the US, (but) they account for just 18% of computer science degrees." Thus, some portion of the demographic picture may be seen as driven by supply.

Others might argue that supply is perhaps driven by demand; that is, the perceived lack of opportunity for women may drive women students' disinclination to seek those degrees. If a particular field is seen as not providing opportunities to you, would you be inclined to invest in an expensive education that leads to that field?

And, there is conjecture that companies will continue to hire and promote "people they feel comfortable with." In that context, it is perhaps predictable that hires will come "from similar backgrounds," and possess similar life-experiences. There also remains the reality that some characteristics may be more difficult to digitize and measure. Perhaps the best measure of a programmer is the volume of code created, but perhaps the best measure of a police officer is interpersonal skills, team leadership, or other more esoteric factors less subject to objective measurement. And, that may be true for a variety of professions and occupations.

Perhaps Tara is a harbinger of our future, with performance measured by numerical analysis dependent upon a spectrum of metrics. But perhaps she is an anomaly, effective in a specific profession that functions effectively with geographically dispersed contributors producing lines of code. Time will tell the extent to which AI performance evaluation invades our world. But, the fact is AI is here now, and many will explore ways to exploit it.