Opinion by Walt Jimenez
About Me:
College Graduate of Multimedia and Computer Science.
Multiple Cisco Systems Certifications.
IQ of 127.
30 years of I.T. experience since '95.
25 years of professional experience.
Been interested in computers since before Google (founded in 1998).
4 year student of Muay Thai kickboxing and martial arts.
Lived 3 doors down from a Canadian Military base for my first 20 years of life.
A Video Gamer for life.
Deepfakes started appearing on the internet between 2017 and 2018, which sparked my initial interest in AI. I've been experimenting hands-on with generative AI since 2021, about four years now. It has advanced by leaps and bounds, especially between 2023 and 2025. The two types I've primarily used are image generation and image upscaling. I've also lightly used AI with text, like ChatGPT (Generative Pre-trained Transformer), which became available in November 2022.
Here are some of my opinions regarding AI and its common use in today's tech lifestyle.
For the responsible user, AI can be an incredible tool. It can create complex graphics and literature, and solve problems in a fraction of the time it would take talented human beings. These are beautiful things when they are used in good faith, for knowledge and entertainment. However, when AI tools are in the wrong hands, they can produce floods of fake information, misinformation, disinformation, and some truly diabolical things. AI is not inherently bad, but it can be bad when used the wrong way by the wrong individuals and evil people.
Firstly, most AI tools available today require you to sign up and subscribe to a service. AI has been monetized by capitalists, forcing people to pay to use it most of the time. Free tools exist, but they are often very difficult to find among the sea of paid and subscription services. What shook things up recently, on January 20th, 2025, was the release of DeepSeek-R1: a free, open-source AI model developed in China that rivals the expensive OpenAI o1 developed in the United States.
The second thing I do not like about these AI tools is that they harvest your usage. When you use the tools, your input, typing, queries, and results are all collected and sent back to be used as training data for the AI models.
What would really be ideal is to have the AI tool and its model on your own local computer, without it being connected to the internet. Having an AI that is self-contained and not transmitting your input and information feels much safer and more comfortable. Knowing that your information is kept private is much better than having it out in the wild, so to speak. If the AI tool is sectioned off or quarantined for your own personal use, it will not grow from your data, but your information will also not be out there. I discovered one such program called LM Studio, which runs models privately on your own machine and offers a chat experience much like ChatGPT. Here is an Instagram post by Will Francis explaining how to get it and use it. (https://www.instagram.com/reel/DFBHGRIIC9N/?igsh=QkFNYUx0VlpPMg%3D%3D)
Just about every service out there is collecting information from you to train its AI model. Here's a list of some internet apps that you probably don't realize are actually profiting off of your information and queries by feeding your data into their own AI models. And while most of your data is kept private between you and them, the apps themselves still collect data, as usually outlined in the fine print of the EULA (End User License Agreement).
Microsoft - Copilot, Cortana, Bing
Google - Gemini, Google Search, YouTube, Google Maps etc.
Meta - Meta AI, Facebook, Instagram, WhatsApp, Facebook Messenger, Threads
Apple - Apple Intelligence, Siri, iMessage, Messages
Amazon - Amazon purchases
TikTok - any content
ChatGPT, Stable Diffusion, Sora, Midjourney, ElevenLabs
All of these big names, and virtually all AI apps, are actually collecting and training on your data, especially mobile apps on your smartphone that require you to input a query. I'm not saying all of this to discourage you from using these tools; I just want you to be mindful when using them. They are so deeply embedded in our modern tech lifestyle that it's pretty much impossible to avoid all of them. The data is usually used to predict what you'll choose next so it can be recommended to you directly. I plan on doing a separate post on how to safeguard your online presence and increase your privacy and cybersecurity. I'll have more on decreasing your digital footprint later.
I've made a playlist on YouTube that explains many things about AI.
Here is my playlist so far: (https://www.youtube.com/watch?v=IBe2o-cZncU&list=PLkWEvbkl6ZX7RRt0L6UXtlVonvUC7LvWB&ab_channel=ColdFusion)
1.) Who Invented A.I.? - The Pioneers of Our Future [https://www.youtube.com/watch?v=IBe2o-cZncU&ab_channel=ColdFusion]
2.) A.I. ‐ Humanity's Final Invention? [https://www.youtube.com/watch?v=fa8k8IQ1_X0&ab_channel=Kurzgesagt%E2%80%93InaNutshell]
3.) China’s DeepSeek Sparks Global AI Race [https://www.youtube.com/watch?v=-KK8SuvwoRQ&ab_channel=ColdFusion]
4.) China's slaughterbots show WW3 would kill us all [https://www.youtube.com/watch?v=6D4rsqxqSIc&ab_channel=DigitalEngine]
I liked one video so much that I used an AI tool to transcribe its audio into text.
The video is titled "AI Visually Explained in 12 Minutes". It has been removed from YouTube, but here is an alternate working link below.
(https://drive.google.com/file/d/1R75Rjc_JQa-3XoNnTz8twijRU70c2xCm/view?usp=sharing)
I also used a second AI to spell-check and correct the grammar while preserving the original structure. Here is the transcript from that video.
Chapter one, the types of AI.
Before AI, machines were built to follow our specific instructions. They did things based on conditions we programmed into them. They were rules-based, but AI is different. Instead of giving machines instructions on what to do and how to do it, we train them to think and do things on their own, like raising a child. This is why people call AI a black box: it turns your input into an output based on similar things you've shown it in the past, but you don't know exactly what's happening inside the box. AI basically combines your input in the moment with the data it's been trained on to generate an output, and there are three types of AI boxes you should know about, which you can differentiate based on their output.
First, we have predictive AI, which labels things based on prior data, like marking an email as spam, identifying someone in a picture, or recommending what you should watch.
Generative AI creates new content like text, images, and videos. Agentic AI outputs actions based on a given task, like self-driving cars: they take a destination as input, then plan the trip and drive the car while stopping at traffic lights, following speed limits, and not running over pedestrians.
Another example is AI agents in the cloud. If you give them a task, they can access your computer, apps, or accounts to execute actions on your behalf.
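The rules-based versus trained contrast at the start of this chapter can be sketched in a few lines of Python. This is only a toy illustration (the word lists and example emails are made up), not a real spam filter:

```python
# Rules-based: a human writes the condition explicitly.
def spam_by_rule(email):
    return "free money" in email.lower()

# "Trained": the program derives its own rule from labeled examples.
def train_spam_words(examples):
    """Collect words that appear in spam examples but never in normal ones."""
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words

training_data = [
    ("claim your free money now", True),   # spam
    ("meeting moved to friday", False),    # not spam
]
learned_words = train_spam_words(training_data)

def spam_by_training(email):
    return any(word in learned_words for word in email.lower().split())

print(spam_by_rule("FREE MONEY inside"))     # True
print(spam_by_training("claim your prize"))  # True
```

Notice that nobody told the second function which words matter; it pulled them out of the examples on its own, which is the "black box" idea in miniature.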
These are the types of AI, but you can also define AI by its level of intelligence. Narrow AI is built for specific tasks, which applies to all AI systems we have today. Artificial general intelligence, or AGI, is theoretical for now but would match human intelligence across all tasks and scenarios, though nobody agrees on a formal definition of what AGI actually is. And finally, artificial superintelligence is also theoretical but would surpass human intelligence by far.
Chapter two, how AI is created.
Imagine you adopted a pet robot and you wanted to teach him how to think and do things on his own. This is called machine learning, and there are three ways you can do this. You could take your robot around the world and point at every object to teach him what it is, over and over again. You're spoon-feeding him information until he recognizes what each thing is. This is supervised learning, and you're feeding a machine a sandwich of labeled data like reviews, emails, and even x-rays. You could also take a hands-off approach by letting him learn on his own. Imagine you were leaving the house and you told your robot to sort the dishes. It would start grouping things based on similarities in shape, size, and color. This is unsupervised learning, and it powers systems that group similar items when you're shopping, browsing pictures, or getting song recommendations. The final way you could teach him is with positive and negative reinforcement, like a coach. Imagine you asked your robot to make you a smoothie. It has no idea how to do it at first, so it starts experimenting with random recipes. After you taste each smoothie, you give it a thumbs up or a thumbs down. Over time, the robot learns to make smoothies just the way you like them. This is reinforcement learning, and it's one of the ways GPT was trained to improve its answers, and the reason why your algorithm knows you so well. Creating intelligent machines isn't just about the training method; it's also about the quality of the data you use. If your robot learns from low-quality information, he'll only give you low-quality results: garbage in, garbage out. High-quality data is more valuable than ever, but we might run out of it soon. One research group predicts that we'll run out of publicly available data between 2026 and 2032 at the current pace of AI development.
This is why AI companies are signing expensive licensing deals to secure new training data. The other solution being explored is synthetic data: using AI itself to generate new training data.
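The dish-sorting picture of unsupervised learning can be sketched as grouping measurements by similarity, with no labels given. A minimal sketch in Python, using made-up dish sizes and a simple distance threshold rather than a real clustering library:

```python
def group_by_similarity(sizes, tolerance=2.0):
    """Group values that fall within `tolerance` of a group's first
    member -- no labels are given, only raw measurements."""
    groups = []
    for size in sorted(sizes):
        for group in groups:
            if abs(size - group[0]) <= tolerance:
                group.append(size)
                break
        else:  # no existing group was close enough: start a new one
            groups.append([size])
    return groups

# Dish diameters in cm: bowls, dinner plates, and serving platters.
dishes = [12.0, 26.5, 40.0, 11.5, 27.0, 39.0, 13.0]
print(group_by_similarity(dishes))
# [[11.5, 12.0, 13.0], [26.5, 27.0], [39.0, 40.0]]
```

The three groups (bowls, plates, platters) emerge purely from the numbers, which is the essence of unsupervised learning.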
Chapter three, how AI becomes biased.
Training an AI system is like raising a child. The parents play a big role in deciding the values they teach them, what they can watch on TV, and where they live, and these factors shape a child's world view. Imagine a child who grew up in Antarctica and the only animal they ever saw was a penguin. They would think that penguins are the only animals in the world, but they have an incomplete world view because they haven't seen all the other animals that exist. AI has the same problem, which can limit its abilities but also create unfavorable outcomes for some people, and this bias can come from different sources. First, the beliefs and values of people can influence how AI systems are designed and trained. If the child's father was an avid penguin enthusiast, he might constantly talk about how penguins are the best animals on earth. The child would adopt the same belief because it's all they've been exposed to, and this is the same reason why some chatbots will respond differently if you ask them controversial questions. Bias can also exist within the training data if it has imbalances that ignore the nuances of the world. If the parents only give their child books about penguins, there is no way for them to learn about new and different species. This is why language models might give you lower quality responses in languages other than English. Bias can also emerge from incorrect and oversimplistic rules that a machine identifies during training. The child who has only seen penguins would think that animals are only creatures that are black and white and have flippers, and this is why image generators associate certain jobs with specific genders and races. Unfortunately, there is no reliable solution to eliminate bias from AI. After all, bias is part of being human, and if we're trying to replicate the way humans think and act, bias is inevitable. AI is only a mirror image of us, after all.
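The Antarctica example can be made concrete with a toy "classifier" that simply predicts the most common label from its training data. With made-up, penguin-heavy data, it calls everything a penguin, which is exactly how imbalanced training data produces bias:

```python
from collections import Counter

def train_majority_classifier(labels):
    """Return a 'model' that always predicts the most common training label."""
    most_common_label = Counter(labels).most_common(1)[0][0]
    return lambda _observation: most_common_label

# Hypothetical training set gathered in Antarctica: almost all penguins.
antarctic_labels = ["penguin"] * 98 + ["seal"] * 2
classify = train_majority_classifier(antarctic_labels)

print(classify("photo of a giraffe"))  # "penguin" -- the bias in action
```

Real models are far more sophisticated, but the principle scales: whatever dominates the training data dominates the output.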
Chapter four, how AI generates text.
Imagine you wanted to become a DJ. You would start by listening to as many songs as possible. You would break down each song to analyze the sequence of beats and instruments that were used. And after lots of listening, you become comfortable creating unique beats and mixes that sound good to the ear. Large language models are built in a similar way. They're trained on millions of pages of text. Each piece of text is broken down into tokens, which are numerical representations of words. With billions of examples, machines can remember the sequences of tokens to create language that sounds good to the ear. Asking AI to generate text for you is like requesting a song from a DJ. Based on your request, the songs learned during training, and the mixing equipment, the DJ creates a unique music mix for you. With language models, your text prompt is combined with the training data and parameters to create a new mix of text. Text generation is like a math formula: it takes your inputs and multiplies them by specific parameters to give you a token of text, except that language models have billions of parameters. AI generates this new mix of text by predicting the sequence of tokens one at a time based on all previous tokens. And each token is the result of hundreds of billions of calculations, like a DJ anxiously trying to keep the crowd going after every beat switch. But sometimes things go wrong. Since every token is a best guess, these guesses can be wrong when asking for factual information. These are called hallucinations. Remember that language models are text generators, not truth generators.
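The token-sequence idea in this chapter can be sketched with a toy bigram model: count which word follows which in the training text, then generate by repeatedly picking the most frequent follower. Real language models use billions of learned parameters rather than raw counts, so this is only an illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def generate(followers, start, length=5):
    """Generate one token at a time, always choosing the most frequent
    next token (greedy decoding), like an LLM predicting tokens."""
    output = [start]
    for _ in range(length - 1):
        options = followers.get(output[-1])
        if not options:
            break
        output.append(options.most_common(1)[0][0])
    return " ".join(output)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Since every next token is just a best guess from the counts, a model like this will happily produce fluent-looking text with no regard for truth, which is the same mechanism behind hallucinations.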
Chapter five, how AI generates images.
Imagine you were a sculptor and someone asked you to make a statue. You would get a stone block and start chiseling away until you get your masterpiece. AI generates images in a similar way. It starts with an image full of random pixels and gradually adjusts them until you get the final result. But how does it do that? The process is called diffusion. These models are trained on billions of images with text descriptions. But machines don't see images like we do. They see them as grids of pixels and each pixel is represented by three numbers for red, green, and blue. When you train an AI model on billions of images, it starts recognizing what words are associated with which pixel patterns and values. It creates a multidimensional map called the latent space. And it maps specific features like a shape, artistic style, or color to their pixel values and patterns. To capture all these fine details, these models are trained by taking each image and slowly randomizing all the pixels until it becomes pure noise. Then the model is trained to reverse the process to reconstruct the original image. So when you ask AI to make an image for you, it's replicating that reversal process by taking an image with random pixels, adjusting the pixel values based on the latent space, until it constructs the image you want.
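The forward half of the diffusion process described above (slowly randomizing an image's pixels into pure noise) can be sketched on a tiny made-up "image". The reverse, learned half is what the trained model does and is not implemented here:

```python
import random

def add_noise(pixels, amount):
    """Blend each pixel toward random noise: amount=0 keeps the image
    intact, amount=1 replaces it with pure noise."""
    return [(1 - amount) * p + amount * random.uniform(0, 255) for p in pixels]

random.seed(0)  # deterministic noise for the example
image = [0.0, 128.0, 255.0, 64.0]  # a tiny 4-pixel grayscale "image"

# Noise the image in steps, from untouched (t=0) to pure noise (t=4).
steps = [add_noise(image, t / 4) for t in range(5)]

# steps[0] is the original image; steps[-1] is unrecognizable noise.
# A diffusion model is trained to run this process in reverse,
# reconstructing something like steps[0] from steps[-1], guided by
# your text prompt.
```

Training on billions of image/noise pairs is what lets the model learn that reversal well enough to "sculpt" a brand-new image out of pure noise.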
Chapter six, the energy cost of AI.
Every time you ask AI a question, that request is sent to a data center and processed by thousands of AI chips, because the hardware in our phones and laptops isn't powerful enough to do it. These data centers need two things to operate. First, they need electricity to run. In 2024, data centers used 2% of the world's electricity, but this number is for all the data centers that keep the internet running; we don't know what share of that can be attributed to AI alone. It's estimated that a single AI request uses 10 times more energy than a standard search, but this estimate is based on AI models from 2023, which are much smaller than the ones we have today. Second, they need cooling systems to prevent overheating. Most cooling systems use fresh water, and the average data center consumes 300,000 gallons of water every day, which is equivalent to 19,000 showers. And depending on the cooling method used, data centers may lose some or all of this water and need constant replenishment. Unfortunately, we don't have precise or reliable numbers for how much energy or water AI uses alone, because tech companies aren't revealing the size of their newest AI models. What we do know is that AI development and usage are likely to keep increasing over the next few years, which will require more data centers, a.k.a. more electricity and water than before. Thankfully, some solutions are starting to emerge to make AI more energy efficient, and here are three you should know about. First, we could run AI on our personal devices instead of data centers, because our devices have been getting more powerful hardware. This is called edge AI, and we're already seeing signs of it. For example, the new iPhone can run basic AI features on your device directly without sending those requests to a data center. Another solution is called model distillation.
The idea is to create smaller models based on large models, making them more energy efficient and cost efficient for specific tasks, because we don't always need the biggest and most advanced model if a task is simple, the same way we don't need a Formula One car to deliver a pizza. Finally, the AI chips and data centers themselves are getting more efficient, which means they need less energy to produce the same output. Over the past eight years, the energy cost to generate one token on an NVIDIA GPU went down from 17,000 joules to 0.4 joules.
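The per-token figures at the end of this chapter imply an enormous efficiency gain, which is easy to verify with quick arithmetic:

```python
# Energy per generated token on an NVIDIA GPU, per the video's figures.
old_joules_per_token = 17_000
new_joules_per_token = 0.4

improvement = old_joules_per_token / new_joules_per_token
print(f"{improvement:,.0f}x more energy efficient")  # 42,500x more energy efficient
```

In other words, one token that once cost as much energy as boiling a small pot of water now costs roughly as much as a single camera flash, per those figures.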
Chapter seven, a brief history of AI.
The story of intelligent machines begins in 1950, when Alan Turing famously asked, "Can machines think?" and proposed the Turing test as a measure of human-like intelligence. But the term artificial intelligence wasn't born until 1956, during a summer research project at Dartmouth College. From the fifties to the seventies, the AI field received generous government funding from DARPA and witnessed its first breakthroughs, like ELIZA, a chatbot that could talk to you like a psychotherapist. The hype and expectations around AI at the time were sky high. Some researchers even made bold predictions, like saying that machines would be as smart as we are within eight years. But the industry faced a massive blow once it couldn't deliver on its promises. Two government reports from the US and the UK concluded that AI was more expensive, less reliable, slower, and not solving any useful problems yet. This sent the industry into its first winter, where funding dried up until the eighties. The field went through a slight resurgence after Japan invested heavily in its computing industry, which pressured the UK and the US to do the same. But they failed to deliver on their promises again and entered a second winter. At this point, 40 years had gone by with no fruitful applications of AI, and the industry was at rock bottom. Some computer scientists were even ashamed to say that they worked on AI and avoided the term completely. But things were about to take a dramatic turn in the late nineties, because the perfect storm was brewing. First, computers were rapidly becoming way more powerful. Computer chips today are 40 million times more powerful than they were 50 years ago. Second, there was way more digital data available as we moved all of our lives and work online, which could be collected as training data. And both of these things enabled machine learning algorithms that can replicate and simulate intelligence.
Since the 2010s, the AI field has slowly recaptured the public's attention, and it became an overnight sensation with the release of ChatGPT. Big tech companies are racing to the top, everyone has an AI startup, and people are torn between the benefits AI can provide, the harm it creates, and most importantly, the existential question: if AI can do it all, what am I good for? The future is hard to predict, but it feels like the real story is just beginning. I hope all of this gives you a good understanding of what AI actually is.