The Empire Strikes Back: The Revolution Against AI


On May 14th, Ernie Smith, founder and editor of the Tedium newsletter, was covering Google’s globally streamed I/O event when he noticed an easy-to-miss tweet from Google Search Liaison Danny Sullivan. While Google was announcing to the world that “AI Overviews” would now topline all search results, the company had also quietly launched a new “Web” filter.

Smith posted about his discovery on Tedium, along with an explanation of how to use the filter. Within days, others jumped on the bandwagon: with help from Johan van der Knijff, Twitter user @ZenithO_o, Mastodon user Donald Hobern and Twitter user @ChookMFC, the filter took on a life of its own.

Within a week, Smith launched a new website, UDM14 — subtitled “the disensh-ttification Konami code” — which uses the filter to automatically strip results of all content generated by artificial intelligence, along with ads, knowledge panels and other often-annoying “features” that now clutter the search results page.
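
For readers curious about what the site is doing under the hood, the trick amounts to a single query parameter. The sketch below is a minimal illustration in Python, assuming the udm=14 parameter that UDM14 popularized; the function name and example query are invented for the demonstration.

# Minimal sketch: build a Google search URL that lands on the plain "Web" tab.
# The udm=14 parameter is the one the UDM14 site relies on; the function name
# and the example query are hypothetical.
from urllib.parse import urlencode
def web_only_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})
print(web_only_search_url("tedium newsletter"))
# -> https://www.google.com/search?q=tedium+newsletter&udm=14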

Hours later, word spread across social media, thanks to Tedium readers like Lauren McKenzie, who blasted the link out to her nearly 19,000 followers on X:

good news 🙂 there’s a website https://t.co/VVWeo9DpEs https://t.co/67IGiXNGx3 pic.twitter.com/f112soKf4X

— Lauren McKenzie (@TheMcKenziest) May 22, 2024

This is just the latest blow: one of the many ways that human creators are fighting against the empire of the machines known as artificial intelligence. But there are other ways to fight back, too.

Artificial Intelligence (AI): A Brief History

The first form of artificial intelligence, or AI, was developed in the 1950s. Alan Turing — a mathematician known as the father of modern computer science — first laid out the mathematical case for thinking machines in his 1950 paper “Computing Machinery and Intelligence.” Turing rose to prominence decrypting German intelligence messages for the British government during WWII. His paper was the first to propose “The Imitation Game” as a way of answering the question “can machines think?”

Also known as the Turing test, “The Imitation Game” is a process that judges a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Filmmakers also used it as the title of the 2014 film starring Benedict Cumberbatch as Alan Turing.

In layman’s terms, Turing didn’t believe that machines could ever truly think — rather, they could only imitate thinking, and their ability to do so depends on the quality of the data fed into them. In computer science terms: garbage in, garbage out.

Turing, however, only posited artificial intelligence in theory rather than in practice. It would take another two years for computer scientist Arthur Samuel to develop a program that played checkers, which is widely regarded as the first recorded instance of a successful AI deployment.

Throughout the decades, artificial intelligence development went through various peaks and valleys. The longer valleys, periods when funding for research and applications dried up after larger initiatives failed, came to be known as “AI winters.” The late 1960s were considered a “winter” period thanks to the failure of machine translation — think early forms of Babelfish or Google Translate. After a surge in the 80s, the 1990s saw another AI winter when many expert systems were abandoned.

Despite these “winter” periods, computer scientists were optimistic about the future of artificial intelligence. In 1970, computer scientist Marvin Minsky told Life Magazine, “From three to eight years we will have a machine with the general intelligence of an average human being.”

It would take several more decades for anything close to Minsky’s prediction to materialize, but 2012 saw a renewed interest in the field thanks to the rise of deep learning. Systems loosely modeled on human neural networks, which learn patterns from vast amounts of data, gained ground over programs that depend on explicitly coded instructions. That progress led to the development of generative AI — computer programs that take prompts, such as text, images, audio, or video, and generate new content based on those prompts.
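
To make that distinction concrete, the toy sketch below (in Python) contrasts the two approaches: a hand-written rule versus a single artificial “neuron” that learns the same behavior purely from labeled examples. It is a deliberately simplified illustration rather than any real production system; modern deep-learning models stack millions of such units.

# Toy illustration only: explicit rules vs. a single neuron that learns from data.
def rule_based(x1: int, x2: int) -> int:
    return 1 if x1 == 1 and x2 == 1 else 0   # the programmer spells out the logic
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # desired answers (logical AND)
w1, w2, bias = 0, 0, 0                 # the "neuron" starts out knowing nothing
for _ in range(20):                    # repeatedly nudge the weights toward the data
    for (x1, x2), target in examples:
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output
        w1, w2, bias = w1 + error * x1, w2 + error * x2, bias + error
print([rule_based(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])                          # [0, 0, 0, 1]
print([1 if (w1 * a + w2 * b + bias) > 0 else 0 for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]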

From Innocuous Applications To Disastrous Results

Despite the looming threat it now seems to pose, artificial intelligence began its “life” in far more innocuous ways. For example, recommendations on Netflix — generated based on the user’s “thumbs up” or “thumbs down” rating of a television show or film — use artificial intelligence, as do coupon recommendations generated by supermarket “loyalty cards.” As an example of the latter, CVS coupons — printed at the end of the pharmacy giant’s notoriously long receipts — are generated by an algorithm that predicts what consumers may want to purchase from the store in the future.
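
A deliberately tiny sketch of that kind of recommendation logic appears below, again in Python. The viewers, titles, and ratings are all invented for the example, and real services use far more elaborate models; the point is simply that the suggestion falls out of other people’s thumbs-up and thumbs-down data rather than hand-written rules.

# Hypothetical example: recommend the unseen title liked most by similar viewers.
ratings = {                        # 1 = thumbs up, 0 = thumbs down
    "alice": {"Space Drama": 1, "Cooking Show": 0, "Heist Movie": 1},
    "bob":   {"Space Drama": 1, "Cooking Show": 0, "Nature Doc": 1},
    "you":   {"Space Drama": 1, "Cooking Show": 0},
}
def recommend(user: str) -> str:
    seen, scores = ratings[user], {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        # Weight each neighbor by how often they agreed with `user` on shared titles.
        agreement = sum(1 for t in seen if t in theirs and seen[t] == theirs[t])
        for title, liked in theirs.items():
            if title not in seen and liked:
                scores[title] = scores.get(title, 0) + agreement
    return max(scores, key=scores.get)
print(recommend("you"))            # a title "you" hasn't rated yet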

Furthermore, there is real benefit to artificial intelligence, especially when it comes to processing data. The sheer volume of customers who shop at CVS daily, for example, makes artificial intelligence all but necessary to process the data behind those coupons and increase sales. Artificial intelligence is also essential to the effective functioning of banking, manufacturing, and healthcare systems — along with any other industry that depends on processing large volumes of data at superhuman speed to keep up with demand.

That, at its core, was the point of artificial intelligence in the first place: to process data and handle other mundane tasks so its human users could focus on complex, nuanced work. Indeed, industry leaders see how artificial intelligence can redefine and restructure existing positions, much as the Industrial Revolution reshaped manual labor beginning in the 1760s.

“With AI tackling task-based work, humans have the opportunity to move up the value chain,” Marc Cenedella, founder of Leet Resumes and Ladders, told CNBC. Cenedella compared the current artificial intelligence revolution to the shift in mid-century offices when computerized word processors replaced floors of typists. Cenedella, like other experts, also noted that today’s artificial intelligence should be “human-centered,” meaning that it should enhance human collaboration by removing mundane tasks from their to-do list, not replace people outright.

Big Tech Embraces AI – At Their Peril

In February, Reuters reported that Google paid Reddit $60 million in what the tech giant called an “AI content licensing deal,” reportedly designed to help “train” Google’s artificial intelligence models. While Google parent company Alphabet dressed the new partnership up in fancy catchphrases like “generat[ing] new revenue amid fierce competition for advertising dollars,” at the core of the deal lay Reddit’s desire to take the company public in an initial public offering (IPO).

Reddit had reportedly been “eyeing” an IPO for three years before the content licensing deal, but only managed to go public less than a month after the deal was struck. The IPO, according to Yahoo!, priced at $34 a share with a market value of $6 billion, the largest IPO for any social media stock.

Reddit’s motivation for the AI content licensing was profit over people — and, to be clear, the goal of any business is to make money under America’s current capitalist model. But the result of this partnership, as 404 Media reports, was “for F-cksmith to tell its users to eat glue.”

Put simply, Google’s new AI Overview couldn’t distinguish between a legitimate post and a so-called “sh-tpost.” These are comments that are either deliberately provocative or off-topic, usually designed to derail the conversation, but which can also be unintentionally funny. The latter is what made Reddit user F-cksmith notorious. Consequently, Google Search users were told to add non-toxic glue to pizza sauce to give it “tackiness” so the cheese wouldn’t slide off the slice.

Seems the origin of the Google AI’s conclusion was an 11 year old Reddit post by the eminent scholar, fucksmith. https://t.co/fG8i5ZlWtl pic.twitter.com/0ijXRqA16y

— Kurt Opsahl @kurt@mstdn.social (@kurtopsahl) May 23, 2024

While humans of average intelligence know better than to add glue to anything edible, artificial intelligence does not. 404 Media asserts that the AI’s inability to make that distinction has destroyed Google Search’s functionality and credibility as a legitimate source of information.

The Human Impact

Artificial intelligence is arguably the most revolutionary development of the tech age. In recent weeks, however, it seems to have fallen off a proverbial cliff — and human creators are fighting back in what amounts to a long-overdue counter-revolution against its unchecked development.

In 2022, Pew Research Center noted that jobs requiring more education, such as law clerks, have greater exposure to artificial intelligence than jobs requiring less education, such as equipment repair, police work, or other roles that call only for vocational training.

While artificial intelligence may displace workers, its full potential to do so hasn’t been realized yet. In May 2023, the Challenger Report from Challenger, Gray & Christmas noted that 3,900 US job losses were linked directly to AI. A 2022 sociological study at Brigham Young University determined that while perceptions of loss are high, only 14% of workers have actually experienced job displacement due to artificial intelligence. While it can be argued that even one job lost is one job too many, these numbers are a far cry from the projection of 42 million workers being displaced by artificial intelligence by the year 2030.

That’s why many industries — including creative and creator industries — are taking a proactive, rather than reactive, approach. They’re vehemently pushing back against artificial intelligence, standing their ground and asserting their importance both in the workplace and in the world at large.

What Can Humans Do About It?

A recent report from The Verge suggests that Google’s human employees are “scrambling” to fix the errors created by the AI Overview product, mostly in response to the now-viral results that treated sh-tposts as legitimate sources of information. Sensing a looming PR disaster, Google’s Powers That Be took a reactive approach — but even the creators of the Overview product believe irreparable damage has been done.

“A company once known for being at the cutting edge and shipping high-quality stuff is now known for low-quality output that’s getting meme’d,” one anonymous founder told The Verge.

Meanwhile, Gary Marcus, an AI expert and an emeritus professor of neural science at New York University, told the outlet that “tech bro” dreams — which seem more like dystopian nightmares to creative types — are far-fetched and impossible to achieve. “[These models] are constitutionally incapable of doing sanity checking on their own work, and that’s what’s come to bite this industry in the behind.”

These inherent flaws in the system, then, leave room for pushback — and that’s something both creatives and “regular” workers are taking advantage of. In December 2023, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) struck a deal with Microsoft covering both the unionization of tech workers and training on emerging technology, all but guaranteeing that those workers cannot simply be displaced by artificial intelligence, according to The Daily Beast.

“This partnership reflects a recognition of the critical role workers play in the development, deployment, and regulation of AI and related technologies,” AFL-CIO president Liz Shuler said in a statement at the time. “The labor movement looks forward to partnering with Microsoft to expand workers’ role in the creation of worker-centered design, workforce training, and trustworthy AI practices.”

In creative industries, artists are engaging in a practice called “data poisoning”: deliberately inserting corrupted or misleading data into training pipelines so that artificial intelligence models stop functioning properly. They see it as the best way to prevent algorithmic models from scraping their work.

Tools like Nightshade can further help creatives keep their work from being absorbed by the algorithm. “You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it’s literally trying to confuse the training model on what is actually in the image,” Ben Zhao, the head of the University of Chicago research team that created the program, told NPR.
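
For a rough sense of how that works mechanically, the Python sketch below applies a faint, structured perturbation to an image file. To be clear, this is a toy illustration of the general concept of data poisoning, not Nightshade’s actual technique, which computes targeted perturbations designed to mislead specific training models; the function name and file paths are made up.

# Toy example of the general idea: add a faint pattern that viewers barely notice
# but that changes the pixel values a scraper ingests. NOT Nightshade's algorithm.
import numpy as np
from PIL import Image
def add_perturbation(in_path: str, out_path: str, strength: float = 4.0) -> None:
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
    height, width, _ = img.shape
    yy, xx = np.mgrid[0:height, 0:width]
    pattern = np.sin(xx / 3.0) * np.cos(yy / 5.0)      # low-amplitude structured noise
    poisoned = np.clip(img + strength * pattern[..., None], 0, 255)
    Image.fromarray(poisoned.astype(np.uint8)).save(out_path)
# add_perturbation("artwork.png", "artwork_protected.png")   # hypothetical file names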

The Bottom Line: AI Do’s and Don’ts

DO educate yourself on artificial intelligence and understand both its capabilities and limitations. The limitations of artificial intelligence make it easier for humans to fill the gap, and demonstrate their worth to their employers and clients in a new, exciting way.

DO NOT participate in “photo challenges” on social media. A study conducted by Forbes revealed that such challenges were used to mine data for artificial intelligence algorithms.

DO get your local politicians involved in the fight against artificial intelligence, but DO NOT rely on them to solve all your problems. Recent congressional hearings demonstrate that politicians on both sides of the aisle are woefully underprepared for the challenges brought on by artificial intelligence, but that doesn’t mean the cause is hopeless. Advocating for their constituents is, in fact, what they’re being paid to do; it’s worth reminding them that they can easily be voted out if they fail to deliver in a meaningful way.

To that end, DO engage in grassroots advocacy and encourage your colleagues, friends, and family to avoid using artificial intelligence outside of its original data processing design.

Above all else, DO NOT give up hope. As with all things, the proverbial pendulum cannot swing to the left or right without eventually balancing itself out; it is up to humans, then, to recognize that history doesn’t repeat itself, but it often rhymes.

“This is, fundamentally, an ongoing battle,” Elizabeth Shermer, associate professor of history with a focus on labor rights at Loyola University Chicago, told The Daily Beast. “It’s just like it has been since we’ve had the assembly line.”

 
