By Eddie Garmat
Since you’re reading a blog post on an AI company’s website, you’re probably okay with Artificial Intelligence. But you probably also know that not everyone is as receptive. Some have protested the mass adoption of AI, arguing that it will destroy jobs, stifle human creativity, and work against their interests, and governments have pushed back as well, with New York State banning DeepSeek on government devices and Texas investigating its parent company for alleged violations of security laws. In this post we’ll look at why people are so resistant, what they’re doing to push back against AI, and what the future might look like.
Job loss and creative destruction
Many people fear that AI will destroy jobs. Estimates vary, but the World Economic Forum reports that up to 60% of companies plan to reduce part of their workforce and replace those roles with AI. Despite these reductions, many fields are still predicted to see growth over the next eight years, according to the Future of Jobs Global Report 2025 as reported in Forbes.
There are two prevalent arguments against this fear. The first is that the job market is expected to grow despite AI’s displacement of workers. The report predicts a 26% increase in software development jobs, a 23% increase in data analysis jobs, and an astonishing 36% increase in data scientist jobs from 2025 to 2034. Jobs specific to data and software aren’t the only ones expected to grow, either: engineering jobs are predicted to increase by 8%, lawyers by 5%, and doctors by 4%. Software development, data analysis, and data science are expected to grow because they are necessary for the growth of AI itself; in engineering, law, and medicine, AI streamlines many processes, but humans are still needed to do the work.
This brings up the other argument against the fear of job loss: creative destruction. Almost every new invention kills an old way of completing the same task. Just as the printing press displaced the hand-written book, the typewriter displaced handwritten documents, and the computer displaced the typewriter, AI looks poised to displace a range of jobs. In economics, this is called creative destruction: the replacement of an old process with a cheaper, more efficient one. The new way of doing things usually brings jobs of its own. The printing press gave rise to press operators, the typewriter made writing a far more accessible trade, and the computer created whole new categories of work while remaining, for writers, mostly a change of tools. Yet almost no modern person would advocate abandoning the computer in favor of the hand-written book for the sake of jobs. Progress always eliminates some jobs, but it creates new ones while making goods cheaper.
Privacy concerns
Another leading criticism of AI is a perceived lack of privacy. According to a survey published by the IAPP, 68% of respondents said they are either somewhat or very concerned about their privacy online, and 57% said AI poses a significant security threat. The New York Post reports that New York State has even banned access to DeepSeek on government devices, citing privacy concerns; further, Bloomberg Law states that Texas has accused the company of violating the state’s privacy and security laws.
American models have also faced strong opposition. One example is the ‘Goodbye Meta AI’ trend. A New York Post article described how many people have posted Instagram stories saying something to the effect of, ‘I do not consent for my data to be used as training data for an AI.’ Those who post this are typically worried that Meta is taking their private data. Meta has said this is not a valid form of objection and that it only uses publicly available data.
Alongside Meta, nearly every AI company has spoken about privacy. OpenAI claims that its business platforms (e.g., the API, ChatGPT Team, and ChatGPT Enterprise) will not use inputs from registered businesses to train its models, and users of its personal products can opt out of having their data used for training. Meta and DeepMind (Google’s AI division) have both taken the position that data is collected but stored securely. Open-model companies like Mistral have said that openly accessible code gives users more control over their privacy. Perhaps the company most dedicated to privacy is Anthropic, the company behind Claude. Anthropic states that it collects relatively minimal data and does not use user input as training data for Claude. According to a recent study by the London-based security, risk, and compliance firm Holistic AI, “[After] conducting a jailbreaking and red teaming audit of the new model, [Claude 3.7 Sonnet] resisted 100% of jailbreaking attempts and gave ‘safe’ responses 100% of the time.”
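To make the opt-out distinction a bit more concrete, here is a minimal sketch of what a business-style integration looks like: a call to OpenAI’s API through its official Python SDK. Per OpenAI’s stated policy, data sent through the API is not used for model training by default, while consumer ChatGPT users manage training preferences in their account settings. The model name and prompt below are illustrative assumptions, not details drawn from any of the reports cited above.

```python
# Minimal sketch of an API-based integration (OpenAI Python SDK, v1-style client).
# OpenAI states that inputs sent via the API and its business tiers are not used
# to train its models by default; the model name and prompt here are assumptions
# chosen purely for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for this example
    messages=[
        {"role": "user", "content": "Summarize our internal meeting notes in three bullets."}
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is simply that training policy is attached to the product tier rather than visible in the request itself; the same call looks identical whether or not a provider trains on its contents, which is partly why users lean so heavily on companies’ stated policies.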
Malicious GPTs
Malicious AI models such as WormGPT, WolfGPT, EscapeGPT, and, most recently, GhostGPT are built to be completely uncensored and are marketed to malicious actors. Abnormal Security published an article about GhostGPT reporting that it can write trojan viruses and phishing emails, and that it does not track users’ activity, removing any accountability for malevolent actors.
Despite the potential threat, most of the reaction has come from the cybersecurity community rather than the mainstream media. Although these malicious models sound alarming, they tend to make things like phishing and malware more accessible rather than better. For now, at least, simply being careful about what you click on and respond to is usually enough to avoid falling victim to somebody using these models.
AI Becoming More Persuasive
Aside from malicious GPTs, mainstream models are becoming better at arguing and at convincing people to change their minds. According to a study reported by Ars Technica, some OpenAI models were able to change nearly 90% of people’s minds on r/ChangeMyView, a forum where people post views they are open to changing and invite others to argue against them. Compared against a sample of random humans, one of OpenAI’s models landed in the 82nd percentile for its ability to change people’s minds.
While the arguments on r/ChangeMyView are typically not very consequential, people fear that this persuasive ability could be used to run mass-misinformation campaigns, sway political decisions, or otherwise act against people’s interests. OpenAI has responded to these concerns by limiting its models’ ability to analyze and make direct statements about individuals, banning accounts associated with authoritarian regimes, and instructing ChatGPT to “assume an objective point of view.”
Threats to Values
Many religious scholars have questioned whether AI fits the values of their respective religions. John Pittard, who holds a Ph.D. in the philosophy of religion, appeared on Yale Divinity School’s podcast to discuss AI’s threats to Christian values. He argues that speaking to a chatbot weakens the bonds with, and reliance on, one’s neighbors. He asks whether God intended humans to create beings of spiritual significance or would keep that power to Him (or Her)self, and he calls for a global standard for AI going forward. While Pittard does not specify what this standard should include, he does say that the guidelines should be explicable and understandable as well as enforceable. [Editor’s note: Isn’t someone trying to globally enforce a religious tenet perhaps a little more dangerous than people confusing AI with God?]
Christianity is not the only religion wary of AI’s seemingly creative powers. Some Muslim scholars hold that Allah alone can create life and question whether AI models can experience consciousness, and some worry that AI could become an unauthorized ustaz (spiritual teacher).
Despite these fears, other scholars in both religions are hopeful that AI can be a force for good. Even Pittard believes that AI can still be used for everything that makes it popular, so long as it does not experience consciousness against God’s will.
Lack of Understanding
AI models are genuinely hard to understand, and it is human nature to avoid what one does not understand. That caution comes from the primordial instinct to approach the unknown carefully, since even an unfamiliar plant or small creature could be fatal. Despite our incredible advances in technology, this tendency remains seared into human psychology, and it leaves many people unwilling to use AI. The other big fear that stems from this lack of understanding is that AI could become destructive.

Pop culture loves to portray AI as a violent threat beyond humanity’s control. Movies like Terminator, Ex Machina, and The Matrix all depict some kind of artificial intelligence trying to kill the main characters. This has instilled a deep fear of any non-human entity that can seemingly think for itself, and large language models slot neatly into that fear. The fear of the unknown and the fear of a threat to humanity reinforce each other in those who don’t understand AI. However, so far, about the worst most AI models can do to a user is harvest their data, which, as we’ve covered, not all models do.
Conclusion
There are plenty of reasons people can be resistant to AI, from religious concerns, to fear of malicious intent, to simply not understanding the technology. However, as the technology progresses, many people should start to see AI as something to be embraced rather than feared.