A pragmatic approach to generative AI
Generative AI's value hinges on engineering and deep domain knowledge
Eighteen months into the generative AI boom, some may wonder whether the shine is wearing off. In April, Axios called gen AI a “solution in search of a problem.” A month later, a Gartner survey across the U.S., U.K. and Germany found that about half of respondents struggled to assess AI’s organizational value, even though generative solutions were the No. 1 form of AI deployment. Meanwhile, Apple and Meta are reportedly withholding key AI features from Europe over compliance concerns.
Between the regulatory hang-ups and the ROI questions, it’s tempting to wonder whether generative AI will turn out to be the tech industry’s latest bauble – more NFT than Netflix, if you will. But the problem isn’t the technology; it’s the mindset. What we need is an alternative approach.
Not all AI is the same, yet companies keep jumping on the bandwagon, particularly for generative use cases. Practitioners will only unlock the true potential of AI – including generative applications – when they prioritize an engineering-first mindset and cultivate deep domain knowledge. Then, and only then, can we build a roadmap for concrete, long-term value.
Not all AI is the same
Broadly speaking, enterprise AI splits into generative and analytical applications. Generative AI has received all the recent attention thanks to its uncanny ability to create written content, computer code, realistic images and even video in response to user prompts. Analytical AI, meanwhile, has been commercialized for far longer. It’s the AI that enterprises use to help run operations, drawing out trends and informing decisions based on large pools of data.
Analytical and generative AI can overlap, of course. Within a given stack, you might find all sorts of integrations – a generative solution on the front end, for example, that surfaces ‘traditional’ AI-powered analytics and visualizes the data behind the answer. Still, the two sides are fundamentally different. Analytical AI helps you operate. It’s reactive. Generative AI helps you create. It’s proactive.
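By way of illustration only, here is a minimal sketch of that layered pattern, with the generative front end stubbed as a plain template rather than a real model call; every function name and figure is made up for the example.

```python
# A minimal sketch of the layered pattern described above, using entirely
# made-up functions and data: an analytical component derives trends from a
# pool of data, and a generative front end (stubbed here as a template rather
# than a real LLM call) surfaces those analytics as an answer to the user.

from statistics import mean

def analytical_layer(daily_sales: list[float]) -> dict:
    """'Traditional' analytics: summarize trends from operational data."""
    return {
        "average": mean(daily_sales),
        "trend": "up" if daily_sales[-1] > daily_sales[0] else "down",
    }

def generative_front_end(metrics: dict, question: str) -> str:
    """Stand-in for the generative layer; in practice this would prompt a
    model with the metrics and the user's question."""
    return (
        f"Q: {question}\n"
        f"A: Average daily sales were {metrics['average']:.1f} units and the "
        f"overall trend is {metrics['trend']}."
    )

if __name__ == "__main__":
    metrics = analytical_layer([100.0, 104.0, 98.0, 110.0, 121.0])
    print(generative_front_end(metrics, "How are sales trending this week?"))
```

The point isn’t the code itself but the division of labor it sketches: the analytical layer does the operating, the generative layer does the explaining.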
Too many stakeholders gloss over this bifurcation, but it matters in the all-important value conversation. AI-powered analytics have long proven their ROI. They make sense of the data we assemble, and the outputs – from customer segmentation to predictive maintenance to supply-chain optimization – drive well-established value.
Generative AI? That’s a different ballgame. We see lots of experimentation and capex, but not necessarily commensurate output. A firm’s engineers might be 30% more effective by using a generative AI tool to write code, for example, but if that doesn’t drive shorter product-to-market cycles or higher net-promoter scores, then it’s difficult to quantify real value. Leaders need to break the value chain into its modular components and ask the hard questions to map generative use cases to real value.
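As a back-of-the-envelope illustration – every figure below is an assumption, not data from any firm – here is how that 30% gain might be mapped onto one link of the value chain, and why the harder downstream questions still remain:

```python
# Back-of-the-envelope illustration with assumed figures, not real company
# numbers: a 30% coding-productivity gain does not by itself answer the value
# question. It has to show up somewhere in the chain, such as engineering
# cost, cycle time or revenue.

engineers = 50
fully_loaded_cost = 150_000      # assumed annual cost per engineer (USD)
coding_share = 0.4               # assumed share of time spent writing code
productivity_gain = 0.30         # the 30% figure from the example above
tool_cost = engineers * 30 * 12  # assumed $30 per seat per month

time_freed = engineers * coding_share * productivity_gain  # engineer-years/year
gross_value = time_freed * fully_loaded_cost
net_value = gross_value - tool_cost

print(f"Engineer-years freed per year: {time_freed:.1f}")
print(f"Gross value of freed capacity: ${gross_value:,.0f}")
print(f"Net value after tooling costs: ${net_value:,.0f}")
# The harder question remains: does that freed capacity actually shorten
# product-to-market cycles or lift net-promoter scores, or does it dissipate?
```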
The bandwagon problem
The ROI problem for gen AI is just as much a bandwagon problem, with many stakeholders starting their search for an AI solution with only a generative implementation in mind. Business leaders are trying to force AI – and generative solutions especially – onto problems they don’t have. They’re inventing use cases just to get in the game, often at the urging of their boards, because they don’t want to be left behind.
It’s time to take a step back. Leaders need to remember two things.
First, it’s important to separate the use cases. Would the problem driving this push for a generative solution be better served by an analytical one, either in whole or in part? Often an organization just needs pure-play analytical AI – for fraud detection or risk management, for example – and not a GPT-style assistant that simply turns it into the latest prompt wizard.
Second, it’s just as important to integrate AI only where it makes sense. It should solve acute problems whose resolution delivers real value to the business. Otherwise, it’s a solution without a problem – like handing the orchestra a set of drums for an arrangement with no percussion.
Why domain knowledge is key
Bandwagon skeptics who appreciate the nuances of AI can adopt a pragmatic approach that delivers honest value by taking an engineering-first perspective. The biggest problem with AI, whether generative or analytical, is a lack of understanding of the context or business domain in which practitioners are working.
You can generate a block of code, but without an understanding of where that code fits, you can’t solve any challenges. Consider an analogy: the enterprise may have let an AI model onto its street, but it’s the engineers who know the neighborhood. The firm needs to invest significant resources in training its newest resident. After all, it’s there to solve an acute problem, not to go knocking on every door.
Done correctly, generative models can deliver substantive long-term value. AI can generate code against well-defined requirements and context – guardrails built as part of a broader investment in domain knowledge – while engineers retain the understanding needed to tweak and debug the outputs. It can accelerate productivity, make practitioners’ jobs easier and, if clearly mapped to the value chain, drive quantifiable ROI.
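As a rough sketch of what such guardrails can look like in practice – the product, function and rules here are hypothetical – domain knowledge is often encoded as tests that any generated code must pass before engineers accept it:

```python
# Rough sketch of domain guardrails, assuming a hypothetical lending product:
# the business rules are encoded as tests that any AI-generated implementation
# must pass before engineers accept and ship it.

def generated_interest(principal: float, annual_rate: float, days: int) -> float:
    """Pretend this body was produced by a code-generation model."""
    return principal * annual_rate * days / 360  # 360-day banker's year

def test_domain_guardrails() -> None:
    # Assumed domain rules: this product accrues on a 360-day year,
    # charges nothing for zero days and never produces negative interest.
    assert generated_interest(1_000, 0.05, 0) == 0
    assert generated_interest(1_000, 0.05, 360) == 50
    assert generated_interest(1_000, 0.05, 30) >= 0

if __name__ == "__main__":
    test_domain_guardrails()
    print("Generated code passes the domain guardrails.")
```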
That’s why it’s essential to have the discipline to invest in this domain knowledge from the outset. Leaders need to build it into any AI investment plan if they want useful, long-term results. Sacrificing depth for speed produces patchy solutions that don’t ultimately help, or help only for a short time. Those who want AI for the long haul need to invest the effort to build context from the bottom up.
A roadmap for discipline
For business leaders, the roadmap to value-driven AI starts with the right question: What problem in my enterprise do I really need AI to solve? Disciplined practitioners bring an engineering mindset that probes deeper problems and seeks targeted solutions from the very start. Done right, analytical or generative AI can accelerate a team’s effectiveness by removing the mundane, boring parts of their roles. But generative models must have proper guardrails and industry-specific training, lest implementations stray out of their lanes.
Approached this way, gen AI won’t go the way of the metaverse. Its primitive beginnings can mature from superficial use cases into actual value, provided enterprises invest the resources to build context. If they don’t, the cost of failure is already becoming clear: firms will pile up additional computing, storage and network costs, only to find they haven’t delivered any measurable cost savings or revenue gains.
But for those who adopt an engineering mindset and don’t take shortcuts, this alternative approach can indeed deliver. A pragmatic approach to AI starts by asking the right questions and committing to an investment in domain knowledge. It ends with targeted solutions that deliver quantifiable long-term value.
Chief Sales Officer, Virtusa.