In an effort to understand the human mind, philosophers and scientists have long looked toward complex technologies to help explain psychological phenomena. In the Middle Ages, philosophers compared the brain to a water pump, a comparison largely influenced by the prevalence of hydraulic systems as a newly developed innovation. During the mid-19th century, models of the brain resembled telegraph technology, dubbed the "Victorian Internet," as neural activation flowing across nerves was likened to information flowing down telegraph wires. Today, many see computers and robots as potential models of the brain, as evidenced by the popularization of the computational model of the mind and advances in artificial intelligence. While analogies offer a simple basis of comparison for the brain's abundant mysteries, they can also make complex technology, and by proxy the brain, seem magical and inaccessible (Anderson). As a result, our society glorifies technology as infallible and impartial, and we have created more roles for technology, especially robots, to become even more involved in our lives.

One human-occupied role that robots are beginning to show promise in replacing is the job interviewer. In recent years, Australia's La Trobe University has collaborated with Japan's NEC Corporation and Kyoto University to create communication robots with emotional intelligence to help conduct job interviews for companies. These robots have the ability to sense facial expressions, speech, and body language to determine whether potential employees are "emotionally fit and culturally compatible" ("Matilda the Robot"). The first robots were called Matilda and Jack, and they have since been joined by the similar robots Sophie, Charles, Betty, and two other unnamed robots (Nickless). Dr Rajiv Khosla, director of La Trobe's Research Center for Computers, Communications and Social Innovation, says that because "IT [information technology] is one of the most pervasive parts of our lives, we believe that introducing devices like Sophie into an organization can improve the emotional well-being of individuals." Computers and robots are often limited to analyzing quantitative data, but communication robots like Matilda are capable of analyzing people and their qualitative, emotional properties. These emotionally intelligent robots show promising potential to eliminate inequalities and biases in the employee selection process, but they will only be able to do so within specific parameters.

Emotionally intelligent robots may be able to help reduce employment inequality because they do not hold implicit biases the way humans do. Unfortunately, our biases often prevent us from making fair and equitable decisions, which is especially evident during the job interview process. National Public Radio science correspondent Shankar Vedantam describes research findings on the effect of bias in interviews. In one study, researchers found that the time of day an interview is conducted has a profound impact on whether or not a candidate is chosen for a job (Inskeep). This means that something as seemingly inconsequential as circadian rhythms, one of our most primitive instincts, can be complicit in clouding our best judgment. Professional employment serves as a primary means of income and an indicator of status.
Given the importance of this role, we should strive to create a fair system for all job applicants, but complete fairness may not be possible if human biases cannot be controlled. In addition to basic physiological factors, these biases also extend to race. In 2013, John Nunley, Adam Pugh, Nicholas Romero, and Richard Seals conducted research to understand the job market for college graduates across racial lines. They submitted 9,400 online job applications on behalf of fictitious graduates, varying college majors, work experience, gender, and race. To indicate race, half of the candidates were given typically white names, such as "Cody Baker," while the other half were given typically black names, such as "DeShawn Jefferson." Despite identical qualifications among the fictitious applicants, black applicants were 16% less likely to be called back for an interview (Arends). Racial bias, even when unintentional and unconscious, can therefore create injustices in the job interview process.

In light of these implicit biases influencing the employee selection process, robots are a viable option for conducting objective and fair job interviews. Although robots are often thought of as machines of human convenience, they have the potential to equalize opportunities, especially in situations where humans think and behave irrationally. Robots operate according to purely logical algorithms, which allow them to remain uninfluenced by irrational prejudices and to adhere strictly to specific criteria. Since a candidate's credentials cannot always be measured quantitatively and are therefore subject to qualitative biases, it may be fairer for them to be evaluated by an objective machine.

However, the use of robots to eliminate bias is not a panacea and must be approached with caution. Although robots act logically, they do so only within the parameters of their programmed algorithms. If a program is coded to be inherently biased, it follows that the machine on which it operates will perpetuate that bias. Last year, Amazon was accused of using a "racist algorithm" that excluded minority neighborhoods in major cities from its Prime free same-day delivery service while consistently offering that service to predominantly white neighborhoods. The algorithm's data linking maximum profit to predominantly white neighborhoods was the direct result of decades of systemic racism, which drove gentrification and division between high-income, white neighborhoods and low-income, minority neighborhoods. Paradoxically, the low-income neighborhoods excluded from the service would have benefited most from it, while the high-income neighborhoods that received it were more likely to already have easy access to quality, low-cost goods. While Amazon claimed it was simply following the facts, which indicated it would not make a profit in the excluded neighborhoods (Gralla), it ultimately used an algorithm based on skewed socioeconomic data to perpetuate racist patterns. Another similar, and perhaps more relevant, example of biased programming is Microsoft's Twitter chatbot experiment. Last year, Microsoft released a chatbot called Tay, designed to interact with teenage Twitter users by imitating their language. Soon after it was released, Twitter trolls manipulated Tay into uttering racial slurs and other derogatory statements. As Tay posted more and more offensive tweets, Microsoft disabled the program and released a public apology.