Enrichment Education

How artificial intelligence can help us get back to the humanity of college admissions decisions

Story at a glance

  • Artificial intelligence technology is slowly making its way into the college admissions process.
  • Companies like Kira Talent are using AI in tandem with software designed to reintroduce a more personalized process for applicants.
  • The company claims the AI technology can help eliminate subconscious biases among admissions committees.

The job of a college admissions officer is not an easy one, and at any competitive institution of higher learning, the process used to hand-pick each incoming student has drawn increased scrutiny over the years.

To ensure the ongoing success of an institution, admissions officers are saddled with the nearly impossible task of efficiently evaluating thousands of applications each school year, with the expectation that their choices will reflect the institution’s standards, grow diversity, and yield students inspired enough to enroll and attend classes in the fall.

The process is a balancing act, and one that is expected to proceed without gender-based or racial bias. The problem? Humans are inherently biased, and schools are beginning to realize the faults in their traditional approach to admissions, one that places an outsized emphasis on test scores and transcripts and often fails to capture the human factor in applicants. The flaws in this system also tend to leave underprivileged groups behind and to keep underrepresented demographics the exception rather than the rule.

Surprisingly, the solution to this issue, to this lack of humanity, might be found in artificial intelligence.

“The mission of the organization is to bring a human aspect back into the admissions process,” said Andrew Martelli, the chief technology officer at Kira Talent.

The Canadian-founded company works with learning institutions around the world in hopes of delivering a more holistic approach to reviewing candidates. Students applying to institutions that partner with Kira undergo a video interview process in which they never encounter another live person. Instead, video- and text-based prompts lead applicants through a series of questions, and their answers are then used to evaluate qualities like leadership potential, verbal and written communication skills, comprehension of key concepts, drives and motivations, and professionalism.

Martelli tells us that artificial intelligence (AI) has entered the picture in a beta phase, used not to evaluate students but rather the admissions officers themselves and their possible biases.

“It’s almost more of a science experiment, to understand things like: are people accidentally or inadvertently introducing bias,” he said. “When schools express interest in it, they are presented with an AI-based tool that takes video data, and analyzes personality traits and behaviors. We take the very same footage that you view as an admissions person to get a sense of the applicant, and we have them run it through a series of algorithms. Schools are then able to run the algorithms, which give them AI-based data to then compare to what their human reviewers said.”

“The idea behind the technology is to help the human reviewer ask questions of themselves. ‘Did I see these traits or qualities? Am I missing something?’ So the emphasis is not on using AI to replace the human aspect of the process. Our whole focus is on helping the human be a better evaluator of other humans.”
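
Kira has not published the algorithms Martelli describes, but the comparison step he outlines can be sketched in a few lines. In the minimal sketch below, every applicant, trait name, score, and threshold is invented for illustration:

```python
# Purely illustrative: Kira Talent has not published its algorithms, so
# the applicants, traits, scores, and threshold below are all invented.

# Each record pairs a human reviewer's rating with an AI-derived score
# (both on a 1-5 scale) for the same applicant video and trait.
reviews = [
    {"applicant": "A101", "trait": "communication",   "human": 2.0, "ai": 4.1},
    {"applicant": "A101", "trait": "professionalism", "human": 4.0, "ai": 3.8},
    {"applicant": "A102", "trait": "communication",   "human": 4.5, "ai": 4.3},
    {"applicant": "A102", "trait": "professionalism", "human": 1.5, "ai": 3.9},
]

DIVERGENCE_THRESHOLD = 1.5  # hypothetical cutoff for "worth a second look"

def flag_divergences(records, threshold=DIVERGENCE_THRESHOLD):
    """Return ratings where the human and AI scores differ enough that the
    reviewer might ask: did I miss something, or over-weight something?"""
    return [r for r in records if abs(r["human"] - r["ai"]) >= threshold]

for r in flag_divergences(reviews):
    print(f"Applicant {r['applicant']}, {r['trait']}: "
          f"human={r['human']}, ai={r['ai']} -> worth a second look")
```

The point of the output is not a verdict but a prompt: each flagged row is a rating the human reviewer may want to revisit, exactly the self-questioning Martelli describes.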

It’s those principles that the company put to work last year in its partnerships with schools like California State University (CSU) Fullerton. Members of the admissions committee were able to pre-record questions for students to answer through video interviews.

“Kira allowed us to bring our own personality,” said Deanna Jung, Assistant Professor of Nursing and Coordinator of Pre-Licensure Programs. “We have a diverse faculty, so there was a diverse group of individuals reading the questions. Students were able to watch those videos and think ‘okay, there are faculty who teach here who are like me.’”

Automating bias

Not all AI systems are created equal, though, or free of unintentionally programmed bias. At the end of the day, data scientists are still human, and many of the subjective choices they make as they create and refine training data can introduce racial bias into machine learning systems.

Human bias pervades nearly every industry and facet of life, not just college admissions. Over the last few years, society has become far more aware of how these human prejudices can affect people’s lives. The same prejudices can slip into AI systems, creating what is called algorithmic bias, which takes forms ranging from gender bias to racial prejudice and age discrimination.
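
One simple way such bias is surfaced in practice is a demographic parity check: compare how often a model produces a favorable outcome for each demographic group. Here is a minimal, purely illustrative sketch with invented data:

```python
# Purely illustrative demographic parity check with invented data: compare
# how often a model's decisions favor each group. A large gap suggests the
# model may have absorbed bias from its training data.
from collections import defaultdict

# Each record: (demographic group, model decision: 1 = favorable, 0 = not)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    favorable[group] += decision

rates = {g: favorable[g] / totals[g] for g in totals}
print("Favorable-decision rate by group:", rates)

gap = max(rates.values()) - min(rates.values())
print(f"Parity gap: {gap:.2f}" + ("  <- investigate" if gap > 0.2 else ""))
```

Real fairness audits use many metrics and far more careful statistics, but a gap like the one this toy example flags is the kind of signal that prompts a deeper look at the training data.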

We’ve already seen how algorithmic bias can lead to damaging consequences, as in 2016, when Microsoft released an AI-based chatbot on Twitter. Its goal was to interact with people through tweets and direct messages, but within hours of its release it began replying to users with offensive and racist language. The chatbot, which was trained on anonymous public data and utilized a built-in internal learning feature, was targeted by hate groups that introduced racial bias into its system.


Fortunately, researchers are working to figure out how to mitigate racial bias in AI-based systems. One postgraduate researcher at the Massachusetts Institute of Technology, Joy Buolamwini, even founded the Algorithmic Justice League, which uses both art and scientific research to highlight the social and cultural implications of AI bias.

Combatting summer melt

A much more successful AI-based messaging experiment than Microsoft’s 2016 disaster comes from Georgia State University. That same year, the university introduced an AI chatbot called Pounce, whose objective was to reduce what schools refer to as “summer melt.”

Summer melt is what happens when enrolled students drop out over the summer, before their first fall semester even begins. According to the university, Pounce reduced summer melt by an impressive 22 percent, which translated to an additional 324 students showing up for their first day of classes in the fall.

Georgia State recognized the power of communicating with its students through text messages but lacked the staff to do so at scale, so it partnered with the Boston-based education technology company AdmitHub.

More than half of the university’s students come from low-income backgrounds, and many are first-generation college students, a demographic that often needs individual attention and financial aid, both of which help enrolled students show up ready to start classes once the semester begins.

The admissions team worked with AdmitHub to identify these obstacles and fed the relevant information and answers into Pounce, which students could then query by text message at any time of the day or night. In its first year, the AI-based system answered more than 200,000 questions from incoming freshmen.
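
AdmitHub has not published Pounce’s internals, but the basic pattern, a curated bank of admissions answers matched automatically against incoming texts, can be sketched in a few lines. All keywords, questions, and answers below are invented:

```python
# Toy sketch only: AdmitHub has not published Pounce's internals, and a
# production system uses natural-language understanding rather than simple
# keyword matching. Everything below is invented for illustration.
FAQ = {
    ("fafsa", "financial aid"): "Submit your FAFSA by the priority deadline.",
    ("orientation", "register"): "Orientation sign-up opens in June through the student portal.",
    ("housing", "dorm"): "Housing applications are online; deposits are due July 1.",
}

def answer(text: str) -> str:
    """Match an incoming text message against the curated answer bank."""
    lowered = text.lower()
    for keywords, reply in FAQ.items():
        if any(k in lowered for k in keywords):
            return reply
    # Unmatched questions are escalated to a human staff member.
    return "Good question! A member of our admissions team will text you back."

print(answer("When do I register for orientation?"))  # matched automatically
print(answer("Can I appeal my parking ticket?"))      # falls through to a human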

“Every interaction was tailored to the specific student’s enrollment task,” says Scott Burke, assistant vice president of undergraduate admissions at Georgia State, on the university’s website. “We would have had to hire 10 full-time staff members to handle that volume of messaging without Pounce.”

The future of AI and education

Experts seem to agree that, at least for now, AI alone should not drive college admissions decisions. Nevertheless, AI-based systems can serve an increasingly important purpose for schools, not only streamlining teams and processes but also educating admissions officers about their unconscious biases.

“I do believe that schools continuously look for ways to adjust their practices. I think COVID has also caused people to take a hard look at the processes that they use — to try to find ways to make them more convenient, to make them more accessible, to make them safer because of the social distancing and other requirements,” says Martelli. 

“I also think a lot of the social movements that we see in place today have asked for schools to take a harder look into their practices and the processes, and the ways they make these admissions decisions.”

As for the future of AI-based systems, Martelli preaches cautious optimism, saying the technology has to be implemented in the right ways. Along with the promise AI shows, the chief technology officer said, there is a lot of danger as well. Experiments over the years have shown just how easily algorithmic bias can make its way into an AI-based system, and Martelli says a biased sample would only serve to perpetuate some of the problematic decision making of the past.

“When you think about using those kinds of tools, we still think it needs a person at the heart of the whole system to make the judgment about another human,” he says. “Do I think there’s promise there? For sure. Do I think we have to be careful about how we apply it? 100 percent.”

A version of this article can also be found on The Hill.

