Three Key Principles to Accelerate Enterprise AI
Many Australian organisations aim to transform their businesses and business models using artificial intelligence.
Recent research by IDC shows that after spending much of last year planning, enterprises are focused on expanding use cases for AI this year, with goals to “improve productivity, simplify operations, automate processes, reduce costs, and provide data-driven insights that enhance decision-making capabilities.” Impressively, 82% of larger enterprises were found to already be using some form of AI or machine learning.
But the research also showed that many enterprises encounter challenges when looking to accelerate their use of AI and trying to integrate it into business operations. This reflects our own experience with assisting Australian organisations.
One of the challenges faced today is getting to grips with the multitude of technology options available. The number of AI and data management tools is already too large to count, new options emerge daily, and the existing choices are evolving at an ever-faster pace. The result is a dizzying array of options, making it hard to determine which path forward best suits your business and its needs.
A second challenge is the unpredictability of the infrastructure and compute costs associated with running AI at scale. Most enterprises understand that for AI to be successful, enough corporate data must be available to train models or to generate actionable insights. Enterprises often choose to consolidate their data in a centralised structure – a data warehouse or a data lake – and then run AI over that data. This centralisation approach can increase the risk of unpredictable data and compute costs.
A third challenge is rising to the expectations set by their company boards to act. Generative AI has made AI more accessible as a boardroom topic, and directors are feeling the FOMO – fear of missing out. Company boards don’t want to be left behind by the opportunity to leverage AI in all its forms, but particularly generative AI. They want to know how AI might assist them to drive transformation of the enterprise’s business model.
None of these are insurmountable.
Enterprises experiencing one or more of these challenges should consider mapping their AI journeys to three key principles that, if implemented, can help them architect a path to generating tangible value from AI sooner.
These principles are: align to business value; integrate and optimise; and connect and collect your data. I will describe each one and how it contributes to steering value generation with AI.
Principle #1: Align to business value
The starting position is a strong business case, applied alongside responsible AI practices. Enterprises need to ground AI solutions in a strong business case, recognising that a better understanding of AI capabilities leads to stronger business cases. Pragmatic use cases come from aligning efforts with existing business processes. For example, an organisation may wish to use AI to improve the speed or accuracy of a decision-making capability, such as assessing a home loan application, or to assist in drafting or summarising an important business document, such as a contract or tender response. By aligning the effort to an existing process that requires optimisation, the work on AI is far better grounded, because the result is targeted at an operational need. IDEO’s design thinking framework of desirability, feasibility and viability is a useful tool for keeping the focus on value generation.
Principle #2: Integrate and optimise
The second principle for making AI work is the ability to integrate and optimise three layers: AI models, data and infrastructure. Doing this accelerates development and deployment, improves technical efficiency and effectiveness, and enables scale and reliability; together, these improve ROI.
Integration speaks for itself. Optimising the performance of each layer, and the integration between them, is where the focus needs to be. This is what drives speed to market, and ultimately user experience and outcomes.
Too often, organisations take a siloed approach to addressing these three layers. A siloed approach to AI model development runs the risk of assuming the required data can be handled at scale, or that the infrastructure can perform or scale to the AI model’s requirements. It can also slow model development, because developers need easily accessible data and suitable infrastructure to have the flexibility to explore and innovate.
The integrated approach is evident in the leading cloud-based unified architectures offered by hyperscalers, and in other cloud-based unified data and analytics platforms. It is also important to explore options that support a hybrid cloud architecture: Forrester’s AI Enterprise Infrastructure research found that AI is easier to deliver when on-premises infrastructure forms part of a hybrid cloud environment.
A hybrid model, comprising a mix of public and private cloud, on-premises and edge hardware, can be used to locate compute power as close as possible to all potential data sources. This balances flexibility and control, allowing enterprises to use the cloud to scale compute while keeping core and sensitive operations close to the data, improving security and responsiveness and reducing cost.
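To make that placement logic concrete, here is a minimal sketch of how such a policy might be expressed. The tiers, thresholds and workload names are hypothetical assumptions for illustration only, not a reference implementation.

```python
# Minimal sketch of a workload-placement rule for a hybrid model:
# keep sensitive, latency-critical work close to the data, and use
# public cloud where elastic scale matters most. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sensitivity: str      # "public", "internal" or "regulated" (assumed labels)
    latency_budget_ms: int     # how quickly results are needed
    needs_elastic_scale: bool  # e.g. burst training or batch scoring

def place(workload: Workload) -> str:
    """Return a deployment tier for the workload (hypothetical policy)."""
    if workload.latency_budget_ms < 50:
        return "edge"             # compute sits next to the data source
    if workload.data_sensitivity == "regulated":
        return "on-premises"      # sensitive data stays in the data centre
    if workload.needs_elastic_scale:
        return "public cloud"     # scale compute up and down on demand
    return "private cloud"        # steady-state workloads near core systems

print(place(Workload("loan-document-summarisation", "regulated", 2000, False)))  # on-premises
print(place(Workload("ad-hoc-model-training", "internal", 60000, True)))         # public cloud
```

The specific rules matter less than the habit: deciding placement workload by workload, with data location, sensitivity and scale requirements as the inputs.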
Principle #3: Connect and collect
We know that AI needs data to train the models. This often leads enterprises to undergo a data transformation program as a precursor to AI, pulling data out of source systems and consolidating it in a central data warehouse or data lake that can then be the foundation for AI use cases. However, these structures can be expensive and slow to establish, and the long lead time delays the organisation’s ability to embrace AI opportunities.
Put simply, if it takes a long time to get your data into the data lake, you can’t be agile or responsive, as the business environment or AI options change.
A complementary option, in addition to bringing compute capacity closer to the data, is to use data virtualisation technology to reduce time-to-data and time-to-value. Data remains in the source systems and is fetched only when needed to power an AI model or capability. Avoiding the need to stand up a costly centralised data store can be a considerable accelerator for enterprise AI.
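As a concrete illustration of on-demand access, the sketch below assumes a federated query engine such as Trino is already connected to two source systems; the catalog, schema, table and host names are hypothetical and used only for illustration.

```python
# Minimal sketch: fetch only the data an AI feature needs, on demand,
# via a federated query engine (here, Trino's Python DBAPI client).
# No warehouse load or copy step is required beforehand.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",   # hypothetical virtualisation endpoint
    port=8080,
    user="ai_service",
)

cursor = conn.cursor()
# Join loan applications (core banking system) with customer records (CRM)
# in place; the engine pushes work down to each source system.
cursor.execute("""
    SELECT a.application_id, a.amount, a.submitted_at, c.segment
    FROM corebanking.lending.applications AS a
    JOIN crm.customers.profiles AS c
      ON a.customer_id = c.customer_id
    WHERE a.submitted_at > current_date - interval '30' day
""")

recent_applications = cursor.fetchall()  # feed these rows to the model or feature pipeline
```

Whatever engine is used, the design point is the same: data stays in the systems of record, and the AI workload pulls only what it needs, when it needs it.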
Legacy IT is most likely not set up to support the development or operation of new AI solutions. It is also possible that your organisation’s data strategy has been a secondary concern and your data quality has room for improvement. AI now represents a significant challenge, or a risk, depending on how you act. Accelerating AI will take leadership. We need to adopt a new mindset about how AI is infused into all aspects of our organisations.
About the author
Petar Bielovich is Director, Data & Analytics for Atturra. He leads a team delivering data, analytics and AI solutions, enabling digital transformation and generating more value from all forms of data. Petar has more than 25 years’ experience working with clients, including Australian Defence, Boral, Telstra and Nestle, and has worked for large professional services organisations and start-ups.