by Eddie Garmat

A Year In

A couple of weeks ago marked the end of my first year with AlignIQ. Over that year I’ve learned a whole lot about AI: how it works, how people perceive it, and where it’s heading in the next few years. From literacy to consciousness, these are the most interesting things I’ve learned in my first year working for an AI startup.

[Image: a student sitting at a college dorm room desk. Caption: Natural college habitat.]

AI Models Don’t Think — They Predict (Kind of Inefficiently)

If your social media feeds look anything like mine, you’ve seen countless videos, articles, books, and more claiming to explain how AI works. While some of these disagree on minor details, they share largely the same idea. Without going too deep: large language models (LLMs) such as ChatGPT, Gemini, and Claude take the input they’ve been given and predict the most likely next word based on the billions of pages they’ve read. Once they output that word, they run the entire conversation through the model again and predict the next one. This repeats over and over until the most likely next step is to end the output. Many people assume AI thinks the way a human does. While its outputs may be reminiscent of how a human expresses thoughts, an LLM is essentially very elaborate predictive text and does not actually understand what it is doing. We explore this misunderstanding in Pushing our Buttons: AI’s Default to Truth.
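To make that loop concrete, here’s a deliberately tiny Python sketch. The “model” below is just a hand-written lookup table I invented for illustration; a real LLM computes its probabilities from the entire conversation using billions of learned parameters, but the predict-append-repeat loop has the same shape.

```python
# A toy "model": maps the previous word to a probability distribution over
# possible next words. (This table is invented purely for illustration;
# a real LLM derives these probabilities from the whole conversation.)
NEXT_WORD_PROBS = {
    "<start>": {"the": 0.7, "a": 0.3},
    "the": {"cat": 0.6, "dog": 0.4},
    "a": {"dog": 0.6, "cat": 0.4},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"sat": 0.7, "<end>": 0.3},
    "sat": {"<end>": 0.6, "down": 0.4},
    "down": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    words = ["<start>"]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS[words[-1]]
        next_word = max(probs, key=probs.get)  # greedy: take the most likely word
        if next_word == "<end>":               # stop when "end output" is most likely
            break
        words.append(next_word)
        # The loop now runs again on the extended sequence: predict, append, repeat.
    return " ".join(words[1:])

print(generate())  # -> "the cat sat"
```

One note on the design: always taking the single most likely word (greedy decoding) is the simplest strategy. Real chatbots usually sample from the distribution instead, which is part of why the same prompt can produce different answers.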

From the outside, LLMs are huge black boxes that take an input and magically return a (usually) accurate output. Before working for AlignIQ, that’s roughly how I viewed AI too. Working here has given me the chance to really dig into how it all works, and it’s been fascinating the whole time.

If you’re interested in a more in-depth look at how AI works, I recommend either What Is ChatGPT Doing … and Why Does It Work? by Stephen Wolfram (2023), which is fairly technical and covers the mathematics of machine learning, or How the hell does Ai actually work?? by Douglas Wreden (2025), which is a far less formal and in-depth explanation.

A Lot Fewer People Than I Thought Are ‘AI Literate’

I’ll be honest: when I joined this company, I had a few doubts about whether the service we offered was valuable. I wouldn’t have stayed if I thought we were simply going to be a dot-com-bubble-style company (big promises, little idea of how to make money, eventual bankruptcy), but I was worried it would be hard to find people who needed our services. Most people around me at the time at least knew AI’s most common limits, misuses, and abilities, and those people would not want to take a course on AI upskilling. What really opened my eyes was our recent partnership with Non-Profit Builder. With their help, we’ve been able to reach far more customers than we could before. Many of these new customers had far lower levels of AI literacy (and, indeed, tech literacy) than I expected. I thought people would be requesting seminars on secure, ethical, and practical AI usage, when in reality we’ve been looking at offering training on AI basics (what AI is, how to use it, etc.), the basic dos and don’ts of AI, and simple tasks it can perform. The sheer need for ‘basic understanding’ courses was far beyond what I expected a year ago. This demand is great for business and was certainly a positive surprise for me.

Another version of this came up in my “Why Are Some People So Resistant To AI?” piece. In it, I examined claims made by John Pittard, a philosopher of religion with a Ph.D. who teaches at Yale Divinity School. Pittard asked whether (the Christian) God would want humans to create beings with spiritual significance or would want to keep that power to Godself. From the outside, it may seem like LLMs think, but as discussed in the previous section, LLMs are mostly just massive math equations. They do not think, hold no spiritual significance, and would likely be of little concern to God in that sense. Pittard seems not to see it this way, going on to ask further questions such as ‘Should churches baptize AI?’, ‘Should AI be allowed to partake in communion?’, and more. A math equation you might write on a piece of paper is hardly different in kind from AI, and neither should be a candidate for baptism or communion. As I got further into this job, I began to realize that many people, including those who specialize in how AI relates to their field, like Pittard, don’t know that AI is mostly just a math equation, yet they speak of models as if they were people.

More Data Doesn’t Make Better AI Models

What would you rather use to teach your kids the English language: the works of Shakespeare or the entirety of X (formerly Twitter)? Most people would likely answer Shakespeare. It’s simply a much higher-quality resource, even if it’s smaller by volume. The same holds for training AI. Though modern models have scraped nearly the entire internet for data, higher-quality data, such as professional papers, real problems and solutions, and conversations between people, is far more valuable than bots arguing on social media or a fake-news website claiming the Earth is shrinking. Even as a kid I assumed that a computer model fed all human conversations and asked to predict words would only get better with more data, but modern models are showing that large quantities of low-quality data lead to low-quality outputs, no matter how much data there is. This went slightly against my intuition: I thought that the more of something you see, the better you get at predicting how to express it. But if you’re mostly or only viewing low-quality examples, you’re more likely to express something of low quality.
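One common response to this is data curation: filtering the scraped corpus before training ever starts. Here’s a minimal, entirely made-up sketch of the idea; real pipelines use trained quality classifiers and deduplication rather than hand-written rules like these.

```python
def quality_score(text: str) -> float:
    """A crude, hypothetical quality heuristic for a piece of training text."""
    words = text.split()
    if len(words) < 5:  # too short to be a useful training example
        return 0.0
    unique_ratio = len(set(words)) / len(words)              # penalize spammy repetition
    shouting = sum(w.isupper() for w in words) / len(words)  # penalize ALL-CAPS text
    return unique_ratio - shouting

corpus = [
    "To be, or not to be, that is the question.",
    "BUY NOW!!! CLICK HERE!!! BUY NOW!!! CLICK HERE!!!",
    "Scientists confirm the Earth is shrinking!!!",
]

# Keep only the higher-scoring examples, even though the dataset shrinks.
filtered = [t for t in corpus if quality_score(t) > 0.5]
print(filtered)  # the all-caps spam is dropped; the other two survive
```

Notice that the fake-news line sails right through this filter because it looks superficially fine, which is exactly why real curation pipelines lean on trained classifiers and human review rather than surface rules.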

Looking Forward to Learning for Another Year

My first year with AlignIQ has been incredibly eye-opening. From how AI models work, to AI literacy levels (even among experts), to how better AI models are trained, I’ve learned a lot. Now I’m looking forward to another year with AlignIQ and to learning even more.

What are some of the new things you’ve learned about AI in the last year? Let us know!

[In this post, we used AI for polish, not purpose.]

