Artificial intelligence (AI) is being jammed into everything these days, and every big tech company seems to be making its own version in a bid for competitive advantage. It's a big deal, as the techno-optimists seem to view AI as the realization of some kind of post-scarcity utopia in which humanity delegates all of its work to a machine god of its own making. The techno-cynics think that instead of moving into utopia we will either move into a dystopia, with humanity enslaved to the machine god, or be destroyed by it. I use the term "Machine God" deliberately: when one reads about AI, there is a sort of religious reverence and fear for what it could do. As the meme might go, AI is tech bros trying to reinvent God. Yet to some extent this is true: its potential is viewed as omnipotent, its reach as omnipresent, and with both in hand they see AI as humanity's way to Heaven or Hell, both of its own making.
Personally, I think they are consuming too many hallucinogens and taking sci-fi renditions of the technology too literally. They are right on one thing, though: AI's impact is and will be enormous. I'm not talking about AI's impact in a "taking our jobs" kind of way; while important, I think that will ultimately be limited. As someone who studies propaganda and how organizations utilize it, I believe the real danger of AI lies in manipulation. If you've interacted with an AI-powered chatbot, you know they can be pretty convincing at pretending to be human. Which brings up an important point: how do you tell if the stranger on the internet is real or an AI? Provided the AI isn't clearly labeled and doesn't divulge that it's an AI, the average person doesn't really have a way to tell. The inability to differentiate person from AI will only become more problematic as time goes on. It will lead to an increase in paranoia regarding internet strangers, with users forever on guard against another bot trying to gain their trust to sell them something or convince them of some belief. On the other hand, we will see an increase in people who only ever interact with AI, forever convinced that their digital echo chamber of imaginary friends is real and has their best interests at heart. In short, the future users of the internet will be schizophrenic: paranoid, suspicious of everyone's motives, seeing people that don't exist and hearing things that never happened.
To really understand the enormity of this impact, it's helpful to talk about what our current level of AI is and is not. Nearly all of the AI driving the current boom is based on what is known as a large language model (LLM). These work by turning words like "Apple" into numerical tokens that look like "12234". Training data, whether scraped from the web or pulled from a database, is tokenized the same way and fed into the model. The machine-learning part is basically taking all of that tokenized data and doing a very advanced statistical analysis on it: which tokens tend to follow which. Generative AI is you prompting the AI, which tokenizes your prompt and effectively runs it through the statistical model to guess, token by token, what the most plausible answer looks like based on other answers, then generates a response from all that data. Agentic AI, the newest topic, is basically AI that acts without step-by-step prompting: you give it a task, it guesses what the task requires, and it executes accordingly. What we don't have is sentient AI with internalized objectives or, more simply, a will of its own. At this point we have a very fancy computer that is very good at guessing. If you ever wondered why AI needs so much data to get better at its job, this is why: the more data the AI has, the better it becomes at guessing what you want.
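To make that "fancy guessing" concrete, here is a minimal toy sketch in Python: a word-level tokenizer plus a bigram counter standing in for the statistical model. Real LLMs use subword tokenizers and neural networks with billions of parameters, so treat this as an illustration of the principle, not the actual machinery.

```python
# Toy sketch of "tokenize, then guess the next token".
# Real LLMs use subword tokenizers (e.g., byte-pair encoding) and huge
# neural networks; this only illustrates the statistical-guessing idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# 1. "Tokenize": map each word to a numeric ID.
vocab = {word: idx for idx, word in enumerate(dict.fromkeys(corpus))}
tokens = [vocab[word] for word in corpus]
print(tokens)  # [0, 1, 2, 3, 0, 4, 0, 1, 5, 0, 6]

# 2. "Train": count which token tends to follow which (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

# 3. "Generate": from a prompt token, repeatedly take the likeliest next token.
id_to_word = {idx: word for word, idx in vocab.items()}
token = vocab["the"]
output = ["the"]
for _ in range(4):
    token = follows[token].most_common(1)[0][0]  # the statistically best guess
    output.append(id_to_word[token])
print(" ".join(output))  # "the cat sat on the"
```

The toy never "understands" the sentence; it only reproduces the statistically likeliest continuation of what it has seen, which is the same reason more data makes the real thing better at guessing.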
For AI to achieve anything useful it needs massive amounts of data, preferably real data. This matters to companies large and small because things that used to take hours or weeks can now take a fraction of the time, even if you have to double-check everything the AI does. Whether it's cheaper or not isn't exactly relevant; the sheer productivity increases more than make up for it, as any company that can do double the work in the same amount of time will outcompete everyone even if the cost is the same. Thus, there is a profound incentive to create and/or acquire the most accurate AI possible, which requires massive amounts of data. Where will they get it? Social media companies will sell what they have, websites sell cookie data all the time, some of it is simply in the public domain, and the rest is under copyright, privately held, or the like. Regarding privacy laws, copyright, and people's personal work, let me ask you a question. If you are competing for dominance in the AI race, are you going to follow the law and refrain from stealing data or violating copyrights, even if following the law could cost you the race? Given the same choice, do you think your competitors will follow the law? It's the classic prisoner's dilemma: all the competing tech companies have a choice between breaking the law, violating privacy, and stealing the hard work of billions for a competitive edge, or following the law and risking losing out. And as in the classic prisoner's dilemma, if you want to win the race, you should break the law: if your competitors follow the law, you win; if they don't, you are at least on even footing. So I think it's an entirely reasonable expectation that no company serious about AI will actually follow any laws regarding privacy, copyright, or the like.
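To see why this is a genuine prisoner's dilemma rather than mere cynicism, here is a minimal sketch with made-up payoff numbers; the specific values are invented for illustration, and only their ordering matters.

```python
# The data-race incentive as a two-player game. Payoff numbers are
# invented purely to illustrate the structure of the dilemma.
payoffs = {
    # (you, competitor): (your payoff, competitor's payoff)
    ("follow", "follow"): (2, 2),   # legal parity: modest gains for both
    ("follow", "break"):  (0, 3),   # you fall behind in the AI race
    ("break",  "follow"): (3, 0),   # you pull ahead
    ("break",  "break"):  (1, 1),   # illegal parity, plus legal risk for both
}

for competitor in ("follow", "break"):
    best = max(("follow", "break"), key=lambda me: payoffs[(me, competitor)][0])
    print(f"If the competitor {competitor}s the law, your best response: {best}")
# Prints "break" both times: law-breaking is the dominant strategy, even
# though mutual law-following (2, 2) beats mutual law-breaking (1, 1).
```

Breaking the law is the best response no matter what the competitor does, which is exactly why expecting voluntary restraint in the race is unrealistic.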
Yet how does an aspiring tech company gain competitive advantage in the AI market if everyone is stealing all the data they can't legally acquire? The answer is proprietary data: every large tech company like Apple or Microsoft already has a massive customer base generating data that it could very well keep private. There is an incentive to this, too; by keeping the data they collect on their customers, they prevent their competitors, big and small, from using that data in their AI models. This is where you create competitive advantage: your AI is more tailored to your existing customer base because you have data your competitors do not. But how do you get data that customers want to keep private? After all, the more data you have, the better your AI model is, and the better your AI model is, the greater your competitive advantage. Well, you could spy on them. If we apply the previous logic, not spying on your customers means incurring the risk of losing the AI race by forfeiting your proprietary data. So how will they spy on their customers to acquire this data? The simple answer is that, by incorporating their proprietary AI into their existing technology and ensuring such technology only functions with the AI present, they can use the AI itself to spy on you. The actual act of spying will probably look something like this: the AI gets fed data from you in real time, like an intrusive computer virus, every search, every keystroke, every text, post, and phone call. The argument will be that this data improves the AI's functioning on both your device and the larger model that the company owns. The best part for the company is that it makes this intrusion into your privacy a requirement for service, so if you want to use the newest smartphone, tablet, or even program, you effectively have to agree that the parent company can spy on you for its AI model.
You can probably guess where this eventually leads: if you have a massive trove of data on your customers and a piece of technology that sidesteps all known encryption methods by reading your data on the device itself, the government agency that deals with spying probably wants to have a word with you. Now, in a country with robust privacy laws and a healthy aversion to authoritarian impulses, a big tech company can keep this data "in house." But what about China or Russia or any other repressive state with a profound interest in suppressing dissent? Does a company comply with those nations, or does it lose the ability to do business in those countries? If your competitors comply, they have customers, and therefore data, that you don't, meaning you incur a massive competitive risk if you don't comply. So you are probably going to comply with these repressive countries and work with their agencies in pursuit of their policy goals. Now, I don't foresee any country realistically allowing foreign powers sole, unlimited access to the data of its people; that isn't smart and invites a great deal of potential harm. So non-repressive countries are also going to want access to your data and to how those other countries are using it. Meaning that sooner or later, large tech companies and their proprietary AI models will become an integral part of the intelligence apparatus for the purposes of data collection and predicting what foreign adversaries are doing.
Now every government ends up incentivized to co-opt tech companies, integrating them into its intelligence apparatus and using the ill-gotten data to inform policy decisions. There is profound danger in this. Imagine having nearly every piece of data about an individual at your disposal: every text they ever sent, every private conversation they ever had within a microphone's reach, every embarrassing picture or video, every corrupt action or lie, everything. You could stop a terrorist attack before it ever gets off the ground, or you could arrest a corrupt legislator, blackmail political opponents, and punish people for dissent they never voiced outside their own home. How many people would you trust with that power? How many people would you trust to hold that power over you forever? How long before those you do trust with this power start using it for personal gain? This is where the future is currently headed: the companies adopting and developing AI will attempt to gather all the data they can, legally or not, and governments, repressive or well-meaning, will want access to that data for their own purposes.
But something that hasn't been addressed is how this data, and the AI being trained on it, will actually be used beyond spying. Think of anything that can be done digitally: coding, art, data entry, marketing, writing, and so on. Companies will try to get AI to do these things instead of people. Why? Well, for one, it's cheaper over the long term, since the electric bill is generally lower than a competitive compensation package. As a bonus, AI doesn't take days off. However, most early adopters have found this to be a foolish strategy, since what they have is effectively an advanced guessing machine that requires constant handholding. AI is still in the earliest stages of integration, which means there will probably be a lot of failed attempts to replace people with it over the long term. It's highly likely that in most cases AI will be given someone's position or task, fail terribly, and the company will end up having to refill the position or re-delegate the task. However, this won't be true across all industries, especially data-driven ones. As stated before, AI requires massive amounts of data and a continual influx of new data to improve the model. So it stands to reason that AI will find the most success in fields that rely upon a constant influx of new data.
Which brings us to how AI will be used in marketing, propaganda, and politics, the three fields with the biggest interest in an individual person's data. For marketing, this should already be familiar: you talk about something somewhere, or look something up, and suddenly you see ads for it all over the place. Now, let's put an AI in your phone that takes every search, every conversation within earshot, every text, every social media post, everything, and uses it to update its own model. It then creates a profile specific to you and generates individualized ads designed to elicit a positive response. Imagine you're looking for toilet paper and you get a video from Charmin in your social media feed addressing you by name, saying it's that time again to buy toilet paper and that Charmin is always there for you with low, low prices, complete with locations of where to buy. Very creepy, and very likely what the first wave of AI-driven personalized advertising will look like. Don't believe me? Go to ChatGPT or Grok or any other AI and tell it to come up with an ad design for a new drink or something, describing the following: your age, ethnicity, the area you live in, what job you do, what you like and don't like. Take the response and feed it to an AI image generator with the instruction to create a banner advertisement using what the other AI gave you. Within about 5-10 minutes you should have an advertisement that looks pretty close to something you might be sympathetic to. That is using publicly available AI and information anyone could probably find out about you, without any outside assistance. Imagine what a dedicated marketing team could do with all of your data and a vastly more powerful AI dedicated to creating advertisements. This is the future of marketing: you do your thing, browse the web, watch some videos, read the news, amuse yourself with memes, shop on Amazon, and the AI in your new smartphone, tablet, or laptop watches in real time, updates all the other AIs within the same zip code, and collectively they build a profile of you and tailor-make personalized advertising for you. If you know anything about how metadata and advertising work, this is basically what advertisers already do; AI will just make the process even more intrusive and effective.
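As a hypothetical sketch of the first step in that pipeline, here is how little glue code sits between a surveillance profile and an ad addressed to you. Every field, name, and template below is invented for illustration; a real pipeline would send a prompt like this to a text model and hand the result to an image generator.

```python
# Hypothetical sketch: turning a surveillance profile into a tailored
# ad prompt. All fields and values here are invented for illustration.
profile = {
    "name": "Alex",            # scraped, bought, or inferred
    "age": 34,
    "city": "Columbus, OH",
    "recent_searches": ["toilet paper deals", "Charmin vs generic"],
    "tone_that_converts": "folksy, reassuring",  # learned from past clicks
}

prompt = (
    f"Write a 30-word ad for Charmin toilet paper aimed at {profile['name']}, "
    f"a {profile['age']}-year-old in {profile['city']}. "
    f"Reference their recent interest in {profile['recent_searches'][0]!r}. "
    f"Use a {profile['tone_that_converts']} tone and name a nearby store."
)
print(prompt)  # this string is what would be sent to the generative model
```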
Following this vein of people trying to sell you something, let's talk about AI and propaganda. This is where I think the bulk of AI's downsides will be. First, let's define propaganda. Since everyone has a different definition of it, here is my technical version for the would-be academics:
Propaganda is the intentional manipulation of a target public by elite actors for the purposes of changing or maintaining target public beliefs to achieve a predefined end-state.
In layman's terms, I'm talking about how people and organizations in positions of influence and power intentionally manipulate the public and government officials. Most people will think of the immediately obvious stuff: deepfakes, misinformation, bots, and so on. These are all things that already existed without AI, and all things that have only improved with its introduction. However, there is a greater danger. Let's return to the earlier Charmin example. Say you post something bad about Charmin, like "it's not very absorbent." Suddenly you and your entire social group start seeing content that refutes what you're saying. Most of it even seems legitimate, like a video in your feed doing a toilet paper product review. After some time, and after some of your friends switch their opinions, you might start questioning your own judgment on the subject. Now imagine that with any major product, company, or organization you criticize. If you think this sounds absolutely crazy, just remember that the tobacco industry in the 1950s built a massive propaganda apparatus to convince Americans that smoking wasn't bad for them, complete with its own scientists, technical spokesmen, dozens of non-profits reinforcing these ideas, and more. Many Chinese citizens don't know about the Tiananmen Square massacre of 1989 or many of the other negative things their country is responsible for. All AI is doing here is taking existing propaganda apparatuses and tailoring their messaging specifically to you and your friends, which is only possible because of all the data about you that the AI is fed as you navigate the digital world.
AI and politics is where things get a bit messy, because politics isn't just about making a buck or maintaining a reputation; politics is about power. So what? Why should we expect AI to make things any messier than our usual politics? Well, in an authoritarian dictatorship not much is at stake, except that Comrade Siri is now listening for signs of dissent. In a country with a democratic form of government, things get messier. Let me explain. How do you think most legislators get input from their constituents day to day? It's by email and over the phone, but mostly email. How would they determine whether the emails were really from their constituents and not from a bot? You could cross-reference voter data, IP addresses, and so on. I personally ran a proof of concept in which I prompted ChatGPT to write me a constituent letter for a Republican congressman arguing against tariffs on China. Within about 20-30 minutes I had a piece of correspondence that sounded like a local with an 8th-grade education wrote it, complete with indignation and local slang. Within an hour I had found out how to obtain the publicly available voter data in that congressman's district, how to obtain the metadata for that geographical location down to the zip code, and how to spoof an IP address so that any email I sent would appear to originate from the correct area. In other words, in about 90 minutes, by myself, using publicly available data, I figured out how to convincingly impersonate a constituent from that congressman's district.
With a dedicated team, a few million dollars, and an AI without constraints, I could run an entire influence operation targeting every Republican (or Democratic) legislator with an entirely fake electorate. Imagine you're a legislator and you get several hundred emails about a bill your party supports; the emails appear to be from actual, confirmed constituents, and they are all against the bill. How do you vote? Do you risk losing a primary by voting against those constituents' wishes? No, you will probably vote against the bill, especially if your district is even marginally competitive. Is it illegal? Yeah, super illegal. But if it takes 10-20 years to discover, 10-20 more years for enough evidence to be gathered to indict you, and another 5-10 years to work its way through the courts, you get 20-40 years of manipulating democratic society for your own ends, plus another 5-10 on top of that, before you even think about any actual punishment for the crime. Punishment that, if the tobacco industry and others are any guide, will amount to a formal apology and a large fine. That is, of course, if you don't get pardoned or help get a dictatorship installed. Think it's crazy? Take an afternoon to look up astroturfing, which is essentially setting up fake grassroots non-profits to sway the public and legislators toward specific policy goals. What I'm predicting might even be happening right now. And this is without even considering foreign governments or entities seeking to shift policy into a more favorable position for their own goals.
But where do regular folk fit into this political propaganda apparatus? Well, you already have a place: as the consumer with the power to accept or reject the arguments and information put before you. Hence the profound incentive to constantly spy on the public; overall public opinion is very important. Countries like India, China, Russia, and Saudi Arabia are already using AI-powered bots in an attempt to sway political, economic, and religious conversations in their own countries and abroad. My guess is that the USA and other Western countries aren't far behind. These differ from your average bot because they can seemingly come up with a realistic argument on whatever political subject they are pushing. At this point the only thing preventing these AI bots from effectively outnumbering regular people is scale; nobody has enough data centers and semiconductors to accomplish it, even if they combined all their resources. However, I think that within the next 10-20 years this will change, and the scale of digital propaganda will become immense. Imagine finding out that 10% of your friends on social media were not just bots but AI, and that those friends were the ones you engaged with most. You might rethink some positions. What if that number were 20%? 30%? 90%? This is where we are headed: a government or organization takes its AI and crafts it to seem friendly to specific audiences, then ensures it is articulate and consistent enough to allay suspicion to the point that it seems real. Your data, which every AI developer is either buying or stealing, is fed into this AI so it gets better at convincing you of a specific worldview. It doesn't stop there, though; that same AI is also used to locate sources supporting its arguments, sources which may themselves be fabricated by another AI. In other words, governments and political actors will be trying to create curated echo chambers for whatever narrative best suits their policy needs. Where this gets really bad is that these AIs will create sprawling digital arguments, drowning out real engagement between real people with real lived experiences.
Trying to stop this technological revolution wholesale is a fool's errand; it will continue for no other reason than that our enemies will pursue it. The best thing we can do is twofold. First, we can co-opt AI ourselves and utilize it to protect us from its overreach. If AI is capable of spying on us, it is also capable of protecting us from being spied on. While this is currently prohibitively expensive, AI will become cheaper over time as developers find more efficient ways to run it. Individuals will also begin programming their own AIs, or figure out how to jailbreak the AIs present in the devices and programs they use. The second method of protection is utilizing what government influence we as citizens and communities have to guard against AI's misuse. In other words: regulation, oversight, and severe penalties for flouting them. This is less viable at the moment, as current governments seem firmly in the camp of bending to the tech industry, but that doesn't mean it shouldn't be pursued in preparation for a time when the strategy becomes more viable. I say this because there is a deep skepticism of the tech industry, which innovates largely by being exceedingly reckless, impulsive, greedy, and wasteful. This ironically finds historical parallels in the industrial revolution and is likely to reach the same conclusion: a revolution marked by recklessness, impulsiveness, greed, and wastefulness cannot be allowed to exist in such a state for long without inviting profound disaster. Thus, be loud, be vigilant, and be ready for when the AI revolution inevitably does something it cannot justify, so that it might be made to serve the many instead of the few.