
We’re in the midst of a major extinction event, according to a number of scientists and other intellectuals. America has more mass shootings than any other country in the world. The Amazon rainforest is on fire.
Real-world crisis after real-world crisis is the norm now, it seems.
It’s exhausting in every way, and there seems to be little to no help available while a crisis is unfolding. It’s enough to drive a person a little mad, never mind entire workplaces, social groups, or whole cities.
Worse, human-caused crises could be stopped, given enough warning and the right resources and procedures. But no human can predict future incidents down to the second or know exactly where the next shooting or robbery will happen, and mistakes that cost lives are psychological torture for whoever makes them. Even for those who make it out alive, there are always questions about what to do next, or what to do in the moment.
The solution? Point Google’s algorithm at real-life crises, like mass shootings and hurricanes, then have it gather the information you need at that moment and send it all your way.
It works something like this:
Perhaps someone close to you lives in California, in an area where wildfires are common due to the native flora. A wildfire causes widespread damage and will kill anyone and anything that can’t escape its reach. While heading to work or on a break, you get a notification from a major news outlet like CNN about the wildfire in the area. More likely than not, a search engineer at Google figured out that information about the crisis would be heavily sought after.
The search algorithm, in response, will surface the most accurate information (with the help of a search engineer), and Google will boost freshness and authority signals to keep the information current and relevant to the ongoing crisis or crises.
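To make the idea concrete, here is a toy sketch of what re-weighting signals during a crisis might look like. Everything in it is invented for illustration: the signal names, the weights, and the pages. Google’s real ranking system is vastly more complex and not publicly documented.

```python
# Hypothetical sketch: re-weighting ranking signals during a declared crisis.
# Signal names and weights are invented; this only illustrates the concept
# of shifting emphasis toward fresh, authoritative sources.

NORMAL_WEIGHTS = {"relevance": 0.5, "authority": 0.3, "freshness": 0.2}
CRISIS_WEIGHTS = {"relevance": 0.3, "authority": 0.35, "freshness": 0.35}

def score(page_signals, crisis_mode=False):
    """Combine per-page signal scores (each 0..1) into one ranking score."""
    weights = CRISIS_WEIGHTS if crisis_mode else NORMAL_WEIGHTS
    return sum(weights[s] * page_signals.get(s, 0.0) for s in weights)

def rank(pages, crisis_mode=False):
    """Return pages sorted best-first under the chosen weighting."""
    return sorted(pages, key=lambda p: score(p["signals"], crisis_mode),
                  reverse=True)

pages = [
    {"url": "breaking-news",
     "signals": {"relevance": 0.7, "authority": 0.9, "freshness": 0.95}},
    {"url": "old-think-piece",
     "signals": {"relevance": 0.9, "authority": 0.5, "freshness": 0.1}},
]

# In crisis mode, the fresh, authoritative news page outranks the older essay.
print([p["url"] for p in rank(pages, crisis_mode=True)])
# → ['breaking-news', 'old-think-piece']
```

The point of the sketch is only that the same pages can be re-ordered by changing the weights, without re-crawling anything, which is roughly what “increasing signals” during a crisis amounts to.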
Who else does this?
According to Matt Southern, Google’s algorithm adjusts the weight of authority signals with help from ‘quality checkers’, who ensure that necessary, accurate information (breaking news articles, social media updates, and important checklists of steps to take) comes up first, ahead of blogs, opinion articles, and think-pieces about shootings or natural disasters.
Real-life crises have become so prolific, almost commonplace, that even social media platforms like Facebook have ‘Crisis Response’ functions: once you find a safe place with working Wi-Fi or 4G, you can send a quick notification to your followers that you are no longer in danger and are alive. Crisis Response has been used most often in the aftermath of mass shootings, such as those in Las Vegas and Manchester.
The alarming rise of shootings, coupled with the lack of knowledge about what to do, added to the workload of Google’s engineers and brought their ethics to the forefront. Misinformation about a terrifying incident like a shooting, spreading like a California wildfire, was unacceptable, but what could be done? No one can read the minds of shooters, and human emergency services can only do so much in a crucial window of time.
So the algorithm was developed and improved upon, backed by evaluator guidelines that are enforced by a senior search engineer and their search quality raters.
These raters must carefully check results after each algorithm adjustment to ensure that they satisfy whatever a user types into the search engine, and that the main content on each page demonstrates the expertise, authoritativeness, and trustworthiness expected of a reputable source.
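The expertise-authoritativeness-trustworthiness standard can be sketched as a simple gate: a page has to clear the bar on all three dimensions, not just one. The threshold and scores below are invented for illustration; real raters apply written guidelines and human judgment, not code.

```python
# Hypothetical sketch of an E-A-T style check. Thresholds and ratings are
# invented; the point is that expertise, authoritativeness, AND
# trustworthiness must all be present, not merely one of them.

EAT_THRESHOLD = 0.6

def passes_eat(ratings):
    """ratings: dict of 'expertise', 'authoritativeness', 'trustworthiness'
    scores in 0..1, as a human rater might assign them."""
    return all(ratings.get(k, 0.0) >= EAT_THRESHOLD
               for k in ("expertise", "authoritativeness", "trustworthiness"))

reputable = {"expertise": 0.9, "authoritativeness": 0.8, "trustworthiness": 0.85}
clickbait = {"expertise": 0.9, "authoritativeness": 0.2, "trustworthiness": 0.3}

print(passes_eat(reputable))  # True
print(passes_eat(clickbait))  # False: written confidently, but not by a
                              # trustworthy or authoritative source
```

The design choice worth noting is the `all(...)`: a page that is expertly written but untrustworthy still fails, which mirrors why a confident think-piece can rank below a plainer breaking-news report.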
Though the raters have always held the most sway in decisions regarding the algorithm, content guidelines have existed since 2013. Only in recent years, from 2017 onward, have search engineers and raters cracked down on general misinformation, extremist content, hate speech, and fake news in an effort to reduce and eventually eliminate them from search results.
Before these crackdowns, the algorithm could be manipulated into spreading misinformation: members of hate groups and adherents of dangerous ideologies would repeatedly search certain keywords and phrases, pushing those pages onto the first page of search results, where they appeared to be mere facts instead of dangerous misinformation.
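The manipulation described above can be sketched with a deliberately naive popularity signal. The log, queries, and page names are all invented; the sketch only shows why a ranker that trusts raw repetition is easy for a coordinated group to game.

```python
# Hypothetical sketch of why naive popularity signals are gameable:
# if a ranker boosts a page simply because a query/click pair repeats,
# a coordinated group can inflate that count far beyond organic traffic.

from collections import Counter

click_log = Counter()

def record_click(query, url):
    """Log one search-and-click event."""
    click_log[(query, url)] += 1

def naive_boost(query, url):
    """Popularity boost that trusts raw repetition counts."""
    return click_log[(query, url)]

# A small coordinated group repeatedly searches and clicks one result...
for _ in range(500):
    record_click("loaded query", "fringe-page")

# ...while a reputable page gets only organic traffic for the same query.
for _ in range(40):
    record_click("loaded query", "reputable-page")

# The fringe page now outscores the reputable one under the naive signal.
print(naive_boost("loaded query", "fringe-page") >
      naive_boost("loaded query", "reputable-page"))  # True
```

Countering this is exactly why authority and trustworthiness signals, weighed by human raters, had to be layered on top of raw popularity.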
One disturbing story recounts the search query “did the Holocaust happen” surfacing pages that denied the Holocaust, a case Google definitely did not want repeated. It drew attention to the fact that such an incident was indicative of a larger, more pervasive problem that could not be solved by targeting pages and results one by one. The algorithm had to be tweaked and then released to control misinformation faster, more efficiently, and on a global level.
The algorithm adjustment is very much a solution that finds the source of a problem and fights it with fire: a strong hit to ensure that nothing ‘infects’ that section of the web again.