Bringing clarity to chaos in AI

AI seems powerful, but most teams struggle because they cannot define what intelligence they actually need. There are, however, ways to address this challenge.

For anyone working with AI, the field’s blinding pace of growth is a constant struggle. It never slows down, and staying current becomes a daily task.

Let me share a personal experience that may help you relate to the effect this kind of speed has on all of us. In 2003, let’s say someone bought a bike, registered it, and fitted a license plate with a white background and yellow lettering. A week later, the government would change the rule: plates must now be white with black letters. Two weeks later, the rule would change again. Faced with this constant churn, some people eventually put every possible format on the same plate so that, whatever rule came next, they would be technically covered. This was a common problem for anyone who got their license during 2003–2005.

I too went through this confusion when I got my license in 2005. And that’s precisely what AI application development feels like today.

If you are an AI developer, you already know the biggest problem. The field is changing so quickly that even a short break can leave you far behind. You return to your computer after a brief vacation, and within minutes someone says, “You’re out of date; something’s changed.” That’s how quickly stacks, algorithms, and platforms are evolving.

The transformation itself is not new. The industry has seen the birth of the Internet, cloud computing, mobility, and even earlier phases of AI. What makes this phase different is that the scope of AI is not defined. Leaders across industries use the phrase “everything everywhere.” Unlike the cloud or the Internet, which have clear functional boundaries, AI has no obvious box. What exactly should AI do for a company? Where does it stop? How do we define scope?

This lack of definition is the root of the problem. AI is everywhere, but organizations don’t know what “everywhere” should mean to them.

To solve this, I work with four principles that help create clarity: define, aim, scale, and grow.

Define

The first principle is to define the level of intelligence required in a solution. We need to be deliberate here because not every problem needs an LLM or a deep learning model. The principle can be approached on three levels.

Good Old Fashioned AI (GOFAI)

At its core, GOFAI is a rules-based system. The logic is found in the code. If the business rule changes, we simply edit it. GOFAI remains extremely useful, practical, and the right answer for many use cases. There is no need to complicate everything.
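A minimal sketch of what “the logic lives in the code” means in practice. The discount rules below are hypothetical examples, not from the talk; the point is that a rule change is just a code edit, with no training or data pipeline involved.

```python
# GOFAI-style rules engine: explicit business rules, directly editable.
def loyalty_discount(years_as_customer: int, order_total: float) -> float:
    """Return the discount rate for an order, decided by explicit rules."""
    if order_total < 50:
        return 0.0           # small orders: no discount
    if years_as_customer >= 5:
        return 0.10          # long-term customers: 10%
    if years_as_customer >= 2:
        return 0.05          # established customers: 5%
    return 0.0               # everyone else: no discount

print(loyalty_discount(6, 120.0))  # -> 0.1
```

If the business later decides the threshold should be three years instead of two, the fix is a one-line edit — which is exactly why this level is often the right answer.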

Machine learning

We use ML when the system needs to learn from patterns instead of rules. This is where training data, predictions, and supervised and unsupervised learning come in.
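To make the contrast with GOFAI concrete, here is a toy illustration of learning from patterns rather than rules: a 1-nearest-neighbour classifier whose behaviour comes entirely from labelled examples. The data points and labels are invented for illustration.

```python
# Learning from patterns: no rule is written for "low" or "high" --
# the labelled training examples themselves define the behaviour.
def predict(train, point):
    """Label a point with the label of its closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# (feature vector, label) pairs the model "learns" from
train = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
         ((8.0, 9.0), "high"), ((9.1, 8.4), "high")]

print(predict(train, (1.1, 0.9)))  # -> low
print(predict(train, (8.5, 8.8)))  # -> high
```

Changing this system’s behaviour means changing its data, not its code — the defining property of the ML level.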

Complex AI

This includes deep learning, dynamic models, advanced architectures, and LLMs. While AI needs to evolve, setting boundaries is equally important. At each stage, keep asking: “Is the current level of complexity strictly necessary to solve the customer’s problem?” Often the honest answer is “no.”

Therefore, developers should pause and choose the minimum level of intelligence that meets the requirement. Nothing more. Nothing less.

Aim

The second principle is to approach the business problem in the right way. AI application development is similar to traditional SDLC, but there is an important difference. In software development, we design, code, test, deploy, and maintain. But AI adds one last permanent and mandatory step: refinement. This is not the same as iterative development. Refinement is a continuous loop built into the system itself. Customer and user feedback should be collected, analyzed and fed back to the model regularly. Without refinement, an AI product deteriorates quickly.

This means two things:

  • A feedback mechanism is a non-functional requirement. It should exist within the code or as a direct point of contact with the customer.
  • Developers need a top-down view (from business case to implementation) because AI is no longer an isolated IT delivery. It affects the business model, not just the software.

In previous years, we did not review business cases frequently. In AI, we must. Needs evolve. Models drift. Use cases change direction. Developers can no longer focus solely on implementation; we need to understand the charter, the model, and the end-to-end refinement cycle.
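The idea of feedback as a non-functional requirement can be sketched as a structured event log that a later refinement job analyses. The event fields and rating scale below are assumptions for illustration, not a prescribed schema.

```python
# Minimal built-in feedback channel: every user signal is captured in a
# structured form so the refinement loop has something to analyse.
import time

def record_feedback(store: list, prediction_id: str,
                    rating: int, comment: str = "") -> dict:
    """Append one feedback event tied to a specific model output."""
    event = {
        "prediction_id": prediction_id,  # which output the user is rating
        "rating": rating,                # e.g. 1 (bad) .. 5 (good)
        "comment": comment,
        "ts": time.time(),               # when the feedback arrived
    }
    store.append(event)
    return event

feedback_log: list = []
record_feedback(feedback_log, "pred-001", 2, "answer was outdated")
record_feedback(feedback_log, "pred-002", 5)
print(len(feedback_log))  # -> 2
```

In a real system the list would be a queue or database, but the design point stands: if this channel does not exist, refinement cannot happen, and the product quietly deteriorates.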

Scale

The third principle has to do with scale. Traditionally, we designed modules based on functions: module 1 performs function A; module 2 performs function B. The tests followed the same pattern.

But AI development today requires a shift from functional thinking to service-level thinking. Why? Because each module now calls external services (LLMs, APIs, cloud platforms) not once, but many times in a single workflow. We no longer live in a world where only one or two modules connect externally. Now, every module does.

This is where microservices become essential. If your organization uses GPT today and switches to another vendor next year, how will your system adapt? If a model is updated, will you have to redo your entire codebase?

With microservices, a change to a service can flow throughout the organization without having to rewrite everything. This is the power of thinking in terms of services rather than functions.

Cloud providers have also evolved. Previously, we worked with infrastructure as a service. Today, we rely heavily on the platform-as-a-service model: APIs, machine learning services, and LLM endpoints. Therefore, our architecture must match the service mentality.

Grow

The fourth principle is growth. To move forward responsibly, we must understand where we are. Gartner’s AI maturity model describes five levels:

1. Awareness: Employees understand the basics of AI.

2. Asset: Teams experiment with POCs, hackathons, and simple use cases.

3. Operational: AI improves the efficiency of internal operations. This is where most organizations are today.

4. Product: AI powers products delivered to customers with accuracy and reliability.

5. Sentinel: Completely autonomous decision-making systems without human intervention. Autonomous driving is an example.

It is important to note that organizations do not move through these levels strictly sequentially. You cannot finish Level 1 and only then start Level 2. Instead, activities at all levels must run in parallel. For example, a company can run AI in its operations while simultaneously creating awareness programs for responsible AI. Growth is not linear; it is layered.

  • Awareness is increased with company-wide training programs, expert conferences and AI meetings.
  • Active exploration comes from hackathons and rapid proofs of concept.
  • Operational impact typically occurs through core teams or centers of excellence that identify use cases and implement them.
  • Product-level trust comes with stronger computing, optimized processes, and model governance.
  • The sentinel level is the long-term vision and represents the highest form of autonomous intelligence.

So why is AI suddenly at an inflection point? We have had forms of AI for decades. The underlying ideas trace back to the 19th century, and the term itself was formally coined in 1956. We have seen expert systems, rule-based systems, game-playing programs such as Pac-Man and chess, and early healthcare applications. But the real acceleration began around 2011 with recommendation systems and the rise of machine learning and deep learning.

The current inflection point is driven by four clear trends.

Data explosion

We collect data, knowingly or unknowingly, from mobile phones, vehicles, applications, and sensors. We now have enough data to experiment with, and algorithms can also generate synthetic data.

Cloud computing

Previously, computing meant connecting to a mainframe somewhere else, often with delays and limitations. Today, a few clicks provide 4 GB, 8 GB, or 32 GB of GPU memory instantly. Cloud platforms have made high-end computing accessible to every developer.

Improved algorithms

New AI models appear constantly. But the real reason for this speed is that so much of the work is open source. Progress is no longer controlled by any single company; developers around the world contribute to it.

Open contribution

The open-source ecosystem is what accelerates this industry. Every improvement is available to everyone. That’s why the field seems to move at lightning speed.

AI is everywhere. AI is everything. But unless we define our own category, we will feel overwhelmed.

No organization progresses linearly and no developer can afford to work without understanding the business and refinement cycle. AI requires a different way of thinking: structured, strategic and continually evolving.


This article is based on the session titled ‘Beyond Automation: Strategic Integration of AI with GenAI and LLM’ by Pradeeba P., Delivery Manager at Thoughtworks, at AI DevCon in Bengaluru. It has been transcribed and curated by Apurba Sen, Senior Journalist at EFY Group.
