Wednesday, January 28, 2026

AI – The Awakening


 


 

We live in the most fantastic age mankind has ever experienced. It is an era that will reward a higher level of consciousness, and where creativity will replace hard labor. Revolution 4.0, led by artificial intelligence, is already changing the way we live. What will the world be like in 20 years? AI is already fueling hundreds of billions of dollars in the biggest infrastructure investment the world has ever seen and is predicted to contribute 19.9 trillion USD to the global economy through 2030. AI will also boost productivity by automating repetitive tasks, improving decision-making, and enhancing creativity.

Much is said about the job losses automation may cause, but here we can refer to the "radiologist's paradox": the appearance of better scanning systems for diagnosis did not cut radiologists' jobs; on the contrary, their hiring surged. According to Nvidia's CEO Jensen Huang, this is because it is crucial to distinguish between the "purpose" of the job and the "task" of the job. While the task itself (the scanning) will be automated, this frees up time for radiologists to dedicate to their true purpose: providing better diagnoses, spending more time with their patients, and improving the healthcare service. Along the same lines, Mr. Huang urges engineers to "stop coding" and to "focus on finding and solving problems that haven't been cracked yet". Coding is one of the tasks where AI, by its very nature, is already surpassing human capacity, and it will be entirely handled by machines in the near future. Does that mean software developers will cease to exist? Rather, they will have to reorient toward designing the solution itself, using AI assistants for the coding.

But will the future be more like Star Trek or Terminator? If machines achieve an intelligence greater than human intelligence, how can we ensure that they don't turn against us? A special mention belongs to the ethics of AI, where some sort of regulation must be put in place. The issue with regulation is that it hinders innovation, but no regulation at all could lead to misuse of the technology. Some signs indicate that LLMs were launched to the public too early: there have already been cases of teenagers allegedly influenced by chatbots to commit suicide. A balance must be achieved, so that the technology thrives but in the right direction!

And speaking of Star Trek, Elon Musk considers the light of consciousness to be very faint, and that we should look after it so that it is not extinguished before we become space travelers. Predictions say that AGI (Artificial General Intelligence) will be achieved before 2030. When we studied LLMs we saw that such a model is trained to do a specific task, mimicking human intelligence. AGI would be able to perform any human task, not just the one it was trained for. This would give way to the rise of robotics: Mr. Musk predicts that in the near future there will be more robots than humans! Robots would serve industrial purposes, but also household ones, taking care of children and of the elderly. As a matter of fact, Tesla's autonomous vehicles have achieved such levels of safety that we will soon be seeing robo-taxis all across the US. The rise of robotics will lead to a world where scarcity is a thing of the past, giving birth to the age of abundance. Raising the stakes, Musk has declared that "work will be optional in the next 10 or 20 years, and money will be obsolete". What kind of world is he planting the seeds for? In the age of abundance, humans will have to find a way to make themselves useful, and many debates are going on about what the role of humanity will be in a world dominated by Artificial Intelligence.

Within that predicted growth, one question remains: who will own such wealth, and how will it be distributed? There has been much talk about U.B.I. (Universal Basic Income), a sort of social plan financed by taxing AI companies. But can we trust governments to distribute wealth appropriately? Will U.B.I. ever reach the under-developed world? And what about the dignity of working? In order to keep your job, you will have to stay ahead of the curve, always staying up to date with the latest technological developments. But the foundation of Revolution 4.0 is that you can also take part in the creation of that wealth. Building an LLM takes millions of dollars of investment, but leveraging existing models to build applications on top of them is accessible to anyone. With unemployment starting to hit entry-level jobs, building your own AI-powered business might just be the way to both stay employed and capture a little piece of that wealth, emerging triumphant thanks to the unique opportunities offered by the World of Tomorrow!

 

 

IDC: Artificial Intelligence Will Contribute $19.9 Trillion to the Global Economy through 2030 and Drive 3.5% of Global GDP in 2030

AI Could Add $15 Trillion to the Global Economy by 2030

Nvidia CEO Jensen Huang to engineers: I want you to stop coding and start...

 Tesla CEO Elon Musk says work will be optional in 10–20 years: Here’s why - Technology News | The Financial Express


Monday, January 26, 2026

Agents and Agentic AI


 


 

We are almost at the end of this series! If you have followed carefully, you will now have mastered the basics of AI! But it would not be complete without introducing agents. So what is an agent in AI? "An AI agent is a software system that uses artificial intelligence to autonomously pursue goals and complete tasks on behalf of users. Unlike traditional programs that follow fixed instructions, AI agents can reason, plan, observe, act, and adapt based on their environment and objectives. They are powered by foundation models (such as large language models) that enable natural language understanding, reasoning, and decision-making. AI agents can process multimodal inputs—text, audio, video, code—and operate either interactively with humans or in the background without direct user input". Now, automation has existed for many years, so what makes the agent special? Its reasoning capacity, based on its LLM. The agent is built around an LLM, enabling it to act by performing a specific task with a specific set of tools. In the past, automation was rule-based: a specific rule was programmed, and the automation would not stray from what its program told it to do. By including an LLM in the loop coordinating the action, the agent mimics human reasoning. As a disadvantage, agents constitute a true black box. Whereas in a rule-based model it is transparent why an input produces a certain output, the reasoning of an LLM based on neural networks is so complex that it is virtually impossible to determine how the model reached a certain conclusion.

The agent is composed of an LLM at its heart, memory, and tools with which it can access different applications. Let us take the example of an agent that sends an email to a customer. If the LLM alone is prompted to send an email, it will not be able to do so. But the agent can provide access to the mailbox, enabling the LLM to enter it and execute the instruction. The LLM could also be connected to other tools, such as a calendar, to check the availability of both parties and organize a meeting. But let's say the customer rejects the invitation and proposes another time. The LLM can re-check the schedules and send new invitations, without any human intervention!
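To make the idea concrete, here is a minimal sketch of the loop at the core of an agent: the LLM chooses a tool, the tool's result is fed back as an observation, and the cycle repeats until the goal is reached. Everything here is hypothetical and simplified: call_llm, check_calendar and send_email are stand-in functions, not a real LLM or mail API.

```python
# Minimal agent loop sketch (illustrative only). The "LLM" and the tools are
# hypothetical placeholders; a real agent would call an actual model and real APIs.

def check_calendar(person: str) -> str:
    # Hypothetical tool: a real agent would query a calendar API here.
    return f"{person} is free Tuesday at 10:00"

def send_email(to: str, body: str) -> str:
    # Hypothetical tool: a real agent would call a mail API here.
    return f"email sent to {to}: {body}"

TOOLS = {"check_calendar": check_calendar, "send_email": send_email}

def call_llm(goal: str, history: list) -> dict:
    # Stand-in for the LLM call. A tiny hard-coded plan keeps the sketch runnable;
    # a real agent would prompt the model with the goal, the history of observations,
    # and the descriptions of the available tools.
    plan = [
        {"tool": "check_calendar", "args": ["customer"]},
        {"tool": "send_email", "args": ["customer", "Shall we meet Tuesday at 10:00?"]},
        {"done": True},
    ]
    return plan[min(len(history), len(plan) - 1)]

def run_agent(goal: str) -> list:
    history = []                          # the agent's short-term memory
    while True:
        action = call_llm(goal, history)
        if action.get("done"):            # the LLM decides the goal is reached
            return history
        result = TOOLS[action["tool"]](*action["args"])
        history.append(result)            # feed the observation back to the LLM

print(run_agent("schedule a meeting with the customer"))
```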

Have you noticed how chatbots are powered by AI, and no longer by humans? I recently needed to reset my password and was greeted by an AI chatbot. After a few exchanges, the bot understood which environment and which system I needed to reset my password in. Then it went into the system, performed the password reset itself, and provided me with the new password! If the bot cannot help you, you can always ask for the intervention of a human. I remember that in my first SAP assignment in 2008 I worked in customer support for a few months, and was actually in charge of resetting the passwords manually! By filtering out the most repetitive basic tasks, the agent automates what would be hours of consulting work, taking productivity to whole new levels.

In the following graph, you can see an example of how an agent works. A user submits a “Create User” form, which triggers an AI-driven workflow. The AI Agent analyzes the user’s information, checks relevant systems, and determines whether the user is a manager. Based on this decision, the system either adds the user to a specific Slack channel or updates their Slack profile automatically, eliminating manual user management.

 

When multiple AI agents collaborate to automate complex workflows, this is called "Agentic AI". The agents exchange data with each other, the entire system working together to achieve common goals. Each agent is then specialized in a specific task, which it performs more accurately. Finally, an orchestrator agent coordinates the activities of the different specialist agents to complete larger, more complex tasks.
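As a rough sketch of that idea, the snippet below has an orchestrator fan a request out to three specialist "agents" and merge their answers. The specialists are plain functions with made-up data, standing in for LLM-backed agents; all names and values are hypothetical.

```python
# Toy sketch of Agentic AI: an orchestrator delegates sub-tasks to specialist
# agents and combines their results. The specialists below are placeholders
# for real LLM-backed agents with their own tools.

def calendar_agent(destination: str) -> str:
    return "the user is free March 3-7"

def flight_agent(destination: str) -> str:
    return f"cheapest flight to {destination}: 420 USD"

def hotel_agent(destination: str) -> str:
    return f"best-rated hotel in {destination}: Hotel Central, 90 USD/night"

SPECIALISTS = [calendar_agent, flight_agent, hotel_agent]

def orchestrator(destination: str) -> str:
    # Fan the request out to every specialist, then merge the partial results.
    findings = [agent(destination) for agent in SPECIALISTS]
    return "Proposed trip:\n- " + "\n- ".join(findings)

print(orchestrator("Lisbon"))
```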

An agentic AI system could work as a travel agent, much like the sketch above: checking your calendar, hotel and flight availability, best prices and service quality, all in the blink of an eye with no human intervention! Each task would be performed by an individual agent achieving maximum efficiency, and they would all synchronize to perform the activities at the same time. You can now understand why so many jobs appear to be at risk from AI automation, with telemarketers, data entry clerks, customer service representatives, legal support roles, assembly line workers and cashiers topping the list. Jobs will certainly be automated and replaced by agents, but AI will lead to the creation of many new jobs as well. What will the net outcome for job creation be? Repetitive task automation will leave us humans with more capacity to do what's important: focus on creativity and building businesses. Will you make the most of it or will you be replaced by a robot? It's your choice!

 

 

What are AI agents? Definition, examples and types | Google Cloud

What are AI Agents?- Agents in Artificial Intelligence Explained - AWS

Microsoft reveals the 40 jobs AI is most likely to replace — and 40 that are safe (for now) | Tom's Guide


Saturday, January 24, 2026

Generative AI




“Generative AI, also known as gen AI, is a subset of artificial intelligence that can create original content such as text, images, videos, audio, or software code in response to user prompts. This technology relies on sophisticated machine learning models called deep learning models, which simulate the learning and decision-making processes of the human brain. These models identify and encode patterns and relationships in vast amounts of data, enabling them to understand natural language requests and generate relevant new content”. Gen AI gained mainstream popularity in 2022 with the launch of ChatGPT, which introduced a user-friendly interface for "prompting" the model to get the desired output (text, images, video, etc.).

Gen AI starts with foundation models: deep learning models trained on massive amounts of data. The goal is to predict the output, for example the next word in a sentence or the next element in an image. We now see clearly what was discussed in previous articles: neural networks are used for prediction, and in this case the goal is to predict the next piece of data. The reason Gen AI achieved popularity only now is that the training requires thousands of GPUs (Graphics Processing Units), intensive computational power and millions of dollars! Once the model is trained, it must be tuned, that is, refined for a specific content generation task. This can be done by fine-tuning (e.g. feeding the model thousands of labeled examples with the desired output) or by reinforcement learning with human feedback (which we have already discussed: the mathematician provides example problems and rewards or penalizes the model depending on its answer). In a final step, the model is evaluated, which can give way to further training and tuning.
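To give a feel for "predicting the next word", here is a deliberately tiny illustration: a bigram model that simply counts which word tends to follow which in a toy corpus. A real foundation model does something far more sophisticated with billions of parameters, but the core objective, predicting the next token from the data it has seen, is the same.

```python
# Toy next-word predictor: count word pairs in a tiny corpus, then predict the
# most frequent follower. A stand-in for what foundation models learn at scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate"
tokens = corpus.split()

# "Training": count which token tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # "Inference": return the most likely next token given the previous one.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))   # -> "cat" (it followed "the" twice, "mat" once)
```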

These models are referred to as Large Language Models (LLMs), a name that reflects their capacity to handle large amounts of data and the fact that they are based on text. The big innovation came about in 2017 with the appearance of transformers in the paper "Attention Is All You Need" by Vaswani et al. A transformer is a deep learning model architecture that processes sequences (like text or time series) using a mechanism called self-attention. This allows the model to understand the relationships between all elements in the sequence at once, rather than step by step. Transformers can analyze all parts of the input simultaneously and learn long-range relationships, making them more powerful and faster than older models like RNNs or LSTMs. This explains why GPT (Generative Pre-trained Transformer) models had a breakthrough in recent years.
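For the curious, here is a bare-bones sketch of the scaled dot-product self-attention at the heart of a transformer, written in NumPy. The dimensions are tiny and the weight matrices random, purely to show how every token attends to every other token at once.

```python
# Minimal self-attention sketch: queries, keys and values are linear projections
# of the input; attention weights mix the values so each position "sees" all others.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # similarity between every pair of tokens
    weights = softmax(scores, axis=-1)          # attention weights, each row sums to 1
    return weights @ V                          # weighted mix of the values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                     # 4 tokens, each an 8-dim embedding
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # -> (4, 8): one new vector per token
```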

Another important concept is RAG (Retrieval Augmented Generation). Without it, an LLM would base its responses only on its training data. RAG pulls in information from an external data source (for example the internet), combining the model's training data with external data to create better responses. This means the responses we get are based not only on the data the model was trained on, but also on whatever external data we provide!
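A toy version of the idea looks like this: retrieve the document most relevant to the question, stuff it into the prompt as context, and only then hand the prompt to the model. Real systems retrieve with vector embeddings over large document stores; here a simple word-overlap score over three made-up documents is enough to show the shape of it.

```python
# Toy RAG sketch: retrieve the most relevant document and build a grounded prompt.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The support line is open Monday to Friday, 9:00 to 17:00.",
    "Premium accounts include free shipping on all orders.",
]

def retrieve(question: str) -> str:
    # Real systems use vector embeddings; naive word overlap is enough here.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context."

# The resulting prompt would then be sent to the LLM of your choice.
print(build_prompt("How many days do I have to return a product?"))
```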

While previous models were rule-based (you provided a set of predefined rules and expected an outcome), Gen AI is able to generate new content by predicting the next piece of data. These models excel at the generation of text, audio, images, video, code, etc. As an example of text generation, Gen AI can provide you with a complete and thorough business plan for your new product, with a result surpassing what a human could produce!

Some of the applications for business include the following:

1) Chatbot for customer support: remember in the 20th century when a real person answered each query that came in through media channels? This was costly and inefficient! A model can be trained on the most typical questions users ask and generate a response right away. If the customer is not satisfied, the intervention of a human can always be requested.

2) Content Creation: marketing agencies are using it to generate content, which can be automated and launched on a schedule.

3) Content Summarization: remember when a lawyer would spend hours reading a contract? Large amounts of text or even video can be summarized to obtain the appropriate conclusions, reducing human error.

4) Supply Chain Optimization: SAP has its own AI copilot, SAP Joule, which enables the creation of quotes, purchase requisitions or purchase orders with a simple prompt. Hours saved in data processing!

5) Predictive Analytics: the predictive nature of the model enables forecasting of trends and customer behavior, allowing proactive management strategies.

Consider that, whereas training an LLM is costly and time-consuming, leveraging existing LLMs to create your own AI-powered apps is cheaper than ever. The whole point is not to reinvent the wheel, since the battle for LLMs is already being fought by the US and China. And even if some larger corporations have developed their own LLMs, the goal of this essay is to show you that in a post-industrial society knowledge replaces capital as the most valuable commodity. As an entrepreneur, don't expect to create the next ChatGPT: use existing technology to leverage your AI-powered products and reach the market in a flash!

 

What is Generative AI? | IBM

What is a Transformer Model? | IBM

What is Generative AI? - GeeksforGeeks

Resource Allocation Graph (RAG) - GeeksforGeeks


Tuesday, January 20, 2026

Deep Learning and Neural Networks

 




 

“Deep learning is a subset of machine learning that utilizes multilayered neural networks, known as deep neural networks, to simulate the complex decision-making processes of the human brain. Unlike traditional machine learning models that use simple neural networks with one or two layers, deep learning models employ three or more layers, often reaching hundreds or thousands of layers”. They are composed of interconnected layers of "neurons" which perform mathematical operations. The strengths of the connections are adjusted through machine learning, providing more accurate outputs. Yann LeCun later popularized the term "self-supervised learning" to describe a way of training neural networks on unlabeled data.

But how do neural networks work? They are inspired by the human brain, sending "signals" forward: each neuron multiplies its inputs by weights and produces an output. These weights can be adjusted through machine learning, influencing the way the input is transformed into an output. If the result is not what is expected, the model learns by correcting the weights, changing the output. The neurons are connected in successive layers, the output of one layer becoming the input of the next. The model learns in the hidden layers, which separates this process from a normal machine learning algorithm. Once the calculations are performed, the results reach the output layer, producing the prediction. The final output is compared to the true label using a loss function. The loss function measures how far the model's prediction is from the actual value. Backpropagation calculates the gradient of the loss with respect to each weight in the network using the chain rule from calculus. This tells us how much each weight contributed to the error. The model then adjusts the weights and voilà! You have a new prediction. This cycle repeats until the prediction is accurate.
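Here is a small numerical illustration of that cycle (forward pass, loss, backpropagation, weight update) using a one-hidden-layer network in NumPy that learns the XOR function. It is a sketch for intuition, not a production training loop.

```python
# Tiny neural network trained with backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # true labels (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input  -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0                                             # learning rate

for step in range(5000):
    # Forward pass: signals flow layer by layer.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)
    loss = np.mean((pred - y) ** 2)                  # loss function (mean squared error)

    # Backpropagation: the chain rule pushes the error back through each layer.
    d_pred = 2 * (pred - y) / len(X) * pred * (1 - pred)
    d_W2, d_b2 = h.T @ d_pred, d_pred.sum(axis=0, keepdims=True)
    d_h = d_pred @ W2.T * h * (1 - h)
    d_W1, d_b1 = X.T @ d_h, d_h.sum(axis=0, keepdims=True)

    # Weight update: nudge every weight against its gradient.
    W1, b1 = W1 - lr * d_W1, b1 - lr * d_b1
    W2, b2 = W2 - lr * d_W2, b2 - lr * d_b2

print(f"final loss: {loss:.4f}")     # should be far smaller than at the start
print(np.round(pred).ravel())        # typically recovers [0, 1, 1, 0]
```

The same loop, scaled up to millions of weights and run on GPUs, is essentially what "training" means for the deep networks discussed above.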

The number of layers, the nodes and the mathematical operations are defined beforehand. Consider that there can be billions of nodes, making neural networks massive! Training such a model requires high computational capacity, something that was only achieved in the early 2010s. These advances in capacity are the main reason why artificial intelligence has only become mainstream in the last few years. However, the evolution of neural networks took decades, the first simple neural network with a single layer dating back to the 1950s!

Some of the most common use cases of Deep Learning and Neural Networks involve:

-        Computer Vision: image classification (identifying objects in images, e.g. dogs vs cats); object detection (self-driving cars detecting pedestrians, signs, etc.); facial recognition (used in security, phones, and social media); medical imaging (detecting diseases from X-rays and MRIs, e.g. cancer screening).

-        Natural Language Processing (NLP): language translation (Google Translate); text generation (ChatGPT, email autocomplete); speech recognition (converting spoken language to text, as in Siri or Alexa).

-        Audio & Speech Processing: voice assistants (real-time speech-to-text and intent understanding); audio generation (deepfake voices, music generation, e.g. Jukebox); speaker identification (verifying who is speaking, for biometric security).

-        Finance: fraud detection (identifying unusual patterns in transactions); algorithmic trading (predicting market movements using time series data); credit scoring (evaluating risk based on customer data).

-        Healthcare: drug discovery (predicting how molecules interact); predictive diagnostics (anticipating patient deterioration or readmission); personalized treatment plans (based on patient history and genomics).

As I wrote some time back, the future was predicted by writers. Someone first imagined it was possible to replicate the human brain and then took action. Neural networks represent a true black box, where it is not certain how the machine readjusts itself to provide more accurate predictions. What have we humans yet to discover? What will unfold once complex mathematical and statistical problems, maybe thousand-year-old problems, are finally solved? As it looks, this is just the tip of the iceberg. Mimicking the human brain is the first step toward the creation of real intelligence. The question is, will the machines turn against us? Stay tuned for more!

 

What Is Deep Learning? | IBM

Purpose of different layers in a Deep Learning Model

Introduction to Deep Learning - GeeksforGeeks


Monday, January 19, 2026

Machine Learning

 



 

“Machine Learning (ML) is a subset of Artificial Intelligence (AI) that focuses on building algorithms capable of learning patterns from data and making predictions or decisions without being explicitly programmed for each task. Instead of following hard-coded rules, ML models improve their performance through experience—adapting as they process more data. At its core, ML involves training a model on historical data allowing them to predict new, similar data without explicit programming for each task”. All the models we have seen previously (Linear Regression, Classification and Clustering) constitute Machine Learning Techniques. There are 3 main categories of learning algorithms, which we will discuss shortly:

-            Supervised Learning: a model is trained using labelled datasets, where both input and output variables are provided. Let's take the case of stock price prediction: multiple inputs based on historical data are provided, each linked to a known output. When the machine is given new inputs, it predicts an output. The model is readjusted and trained until the desired accuracy is obtained (see the sketch after this list).

-            Unsupervised Learning: in this case the data is not labelled, and the output is unknown. The model finds patterns in the data and groups similar items together.

-            Reinforcement Learning: an agent interacts with the environment and performs an action, for which it is rewarded or penalized depending on the outcome. It is used in autonomous vehicles and industrial robots (movement or task execution).
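As promised, here is a minimal supervised-learning sketch using scikit-learn (assumed to be installed). The data is made up: apartment sizes with known prices serve as the labelled examples, and the trained model then predicts the price of a new, unseen apartment.

```python
# Supervised learning in a few lines: fit a model on labelled data, predict on new data.
from sklearn.linear_model import LinearRegression

# Toy labelled data: apartment size in square meters -> price in thousands of USD.
X_train = [[40], [55], [70], [85], [100]]   # inputs
y_train = [120, 160, 205, 250, 290]         # known outputs (labels)

model = LinearRegression()
model.fit(X_train, y_train)                 # "training": learn the relationship

print(model.predict([[60]]))                # predict the price of a new 60 m2 flat
```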


Machine Learning is a subset of Artificial Intelligence and is the basis of Deep Learning. Unlike a simple mathematical or statistical algorithm, the concept is that the machine "learns" as it is trained and provides a better answer the next time. This has given rise to the career of "AI Trainers", people specialized in training the models. As we will see later with the rise of LLMs, a model can be trained to be the best mathematician in the world. A mathematician will provide a mathematical problem to the model and expect an answer. If the answer is correct, he will reward the model. If not, he will explain the right answer to it. The next time the model is prompted for a solution, instead of being rule-based it will apply "logic" and provide a better answer, and so on until it gets it right.

We have seen many of its applications in previous articles, but others include performing internet searches, recognizing human speech, diagnosing diseases, and powering self-driving cars. How long until machines learn to outpace humans, or achieve Artificial General Intelligence? We will take up that concept later!

 

What is Machine Learning? - GeeksforGeeks

What is Machine Learning? | IBM

What Is Machine Learning? Definition, Types, and Examples | Coursera

 


Clustering



 

Step by step we are getting a deeper understanding of Artificial Intelligence! “Clustering is an unsupervised machine learning technique used to group data points, objects, or observations into clusters based on their similarities or patterns. Unlike supervised learning, clustering works with unlabeled data, aiming to uncover hidden structures or relationships within the dataset”. The final goal is for items that are similar to each other to be grouped in the same cluster, enabling the recognition of patterns. Consider the case of a dataset that is massive and not labeled: a clustering algorithm can be used to identify patterns in the dataset, forming several groups.

The characteristics of each datapoint must be represented as numbers: every characteristic or dimension, even a color, gets a numeric value, so no text is involved. Each datapoint is then a point in an N-dimensional space, and several datapoints will lie close to each other. These datapoints form a cluster, and the distance between points represents how similar they are. It is important to define upfront how many clusters there will be, so that the clusters are well differentiated without being too close to each other. Let us take the example of customer segmentation. Each characteristic, for example age, gender, location or income, is given a numeric value. Based on these characteristics, each customer (datapoint) is assigned a location in the N-dimensional space. Those customers who are close to each other constitute a cluster. With this information, a personalized marketing campaign can be devised for each customer group (cluster).
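A tiny customer-segmentation sketch with scikit-learn (assumed installed) shows the idea: each customer is a point described by numbers, and KMeans groups nearby points into clusters without ever seeing a label. The customer data below is invented.

```python
# Unsupervised clustering of customers into segments with k-means.
from sklearn.cluster import KMeans

# Each customer is a point: [age, annual income in thousands of USD].
customers = [
    [22, 25], [25, 30], [27, 28],     # younger, lower income
    [45, 80], [48, 90], [52, 85],     # older, higher income
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)

print(labels)                    # e.g. [0 0 0 1 1 1]: two customer segments
print(kmeans.cluster_centers_)   # the "average customer" of each segment
```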



Other examples of use cases include:

-            Products: how many categories of products should an e-commerce site have? By identifying similarities in the products, we can group them in clusters which will form product categories.

-            Recommendation systems: Netflix utilizes user preferences to group users into categories and recommend movies or series to each group.

-            Healthcare: Patient stratification by symptoms or outcomes, where we can group the type of patients to provide personalized attention.

Clustering is a major component of the Machine Learning portfolio, where dividing datasets into groups by recognizing patterns in the data is key to personalization. Now you know how the Netflix recommendation algorithm works! This is getting interesting!

 

Clustering in Machine Learning - GeeksforGeeks

Clustering in Machine Learning


Sunday, January 18, 2026

Classification Models



 

This roadmap to AI is a blast! Let us now take a look at Classification Models. “Classification in machine learning is a predictive modeling process by which machine learning models use classification algorithms to predict the correct label for input data. A classification model is a type of machine learning model that sorts data points into predefined groups called classes. Classifiers learn class characteristics from input data, then learn to assign possible classes to new unseen data according to those learned characteristics”. In the previous article, we mentioned Linear Regression as an example of a Supervised Learning technique. In this type of technique, you label data to predict an outcome. In the case of Linear Regression, that outcome is a number. In a Classification Model, however, the outcome is a class. For example, you label the data based on whether an image is a cat or not. You provide the model with an image (input), and the output will be a class (yes or no).

Another popular example is determining whether an email is spam or not. By labeling emails as spam or not spam, the machine is trained to recognize the type of email and group them into folders. The model is trained until it produces an output that classifies with accuracy which folder the email belongs in (spam or not). This is called binary classification. In multiclass classification, the model can predict not only whether an image is a cat or not, but whether it is a cat, a dog, a bird, and so on.
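Here is a minimal version of that spam example with scikit-learn (assumed installed): emails are turned into word counts and a Naïve Bayes classifier, one of the models listed below, learns from the labelled examples which class a new email most likely belongs to. The emails are made up.

```python
# Binary classification: spam vs not spam ("ham") from labelled example emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",                # spam
    "limited offer click here",            # spam
    "meeting agenda for tomorrow",         # ham
    "please review the attached report",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)       # turn words into count features

classifier = MultinomialNB()
classifier.fit(X, labels)                  # learn the characteristics of each class

new_email = vectorizer.transform(["click here to win a prize"])
print(classifier.predict(new_email))       # -> ['spam']
```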

There are different types of classification models; we will name just a few (a common code interface for them is sketched after this list):

-            Logistic Regression: it reflects the relationship between one or several inputs (independent variables) and an output (dependent variable). In the case of classification models, the output will be a class. Whereas in Linear Regression the relationship was drawn as a straight line, in Logistic Regression (classification) an S-shaped curve "classifies" the values into groups: depending on which side of the boundary a value falls, it belongs to one group or the other.

 

-            Decision Tree: in machine learning it is a flowchart-like model used for classification or regression, where data is split into branches based on feature conditions to reach a decision or prediction.

 

 

-            Random Forest: it is an ensemble machine learning method that builds multiple decision trees and combines their outputs to improve accuracy and reduce overfitting in classification or regression tasks.

 

-            Naïve Bayes: it is a supervised machine learning algorithm based on Bayes' Theorem, used mainly for classification tasks. It calculates the probability of each class given the input features and selects the class with the highest probability.
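Whichever of these models you pick, libraries such as scikit-learn (assumed installed) expose them through the same fit-and-predict interface, so swapping one classifier for another is a one-line change. The loan data below is invented purely for illustration.

```python
# Three different classifiers, one common interface: fit on labelled data, predict a class.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Features: [age, income in thousands of USD, existing debts]; label: will the customer default?
X = [[25, 30, 2], [40, 90, 0], [35, 45, 3], [50, 120, 1], [23, 25, 4], [45, 80, 0]]
y = ["yes", "no", "yes", "no", "yes", "no"]

new_customer = [[30, 40, 3]]
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    model.fit(X, y)
    print(type(model).__name__, model.predict(new_customer))
```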

 

 

These are just a few of the many methods to classify labeled data. As you have probably noticed, Machine Learning relies heavily on statistics. But the final goal is the same: to predict a class, be it a binary or a multiclass outcome. You are probably now beginning to grasp the role of the Data Scientist in Machine Learning: experts who can build Linear Regression or Classification Models, training the model to predict the desired outcome. You can now see the infinite applications these models have in real life. Will it rain or not? Will a potential customer default on his loan? Is a transaction fraud or legitimate? Based on a patient's condition, which disease are they suffering from? These models have been around for decades, but the rise of AI has made them more accurate, expanding their uses sky-high! They are already present in our daily life, and constitute the backbone of artificial intelligence!

 

 

What is Classification in Machine Learning? | IBM 

What Is Logistic Regression? | IBM

Getting started with Classification - GeeksforGeeks

Classification in machine learning: Types and methodologies

The Ultimate Guide to Decision Trees for Machine Learning

10 Must-Know Models for ML Beginners: Random Forest | by Dagang Wei | Medium

Naive Bayes Algorithm: A Simple Guide For Beginners [2025]

 

 

