
Operationalizing AI: Human Managed & Microsoft Joint Roundtable on AI Strategy for ASEAN

On 22 February 2024, Human Managed and Microsoft Philippines hosted a joint roundtable session with our customers and partners to discuss operationalizing AI in enterprises' day-to-day processes, within their unique contexts.

This article covers the key takeaways on the 3 'Ops' with the highest impact and value for businesses in the age of AI: DataOps (feeding AI), MLOps (tuning AI), and IntelOps (applying AI).

Why operationalization?

The progress in AI, largely spearheaded by tech giants, open-source communities, and new players, has made a new generation of AI-driven products and services accessible to any business that wants them. Many are calling it the newest 'inflection point' in technology since cloud computing. Pilots, initial development, and first launches of AI apps for specific use cases have never been easier. But adoptable examples of operational AI with reusability, scalability, and adaptability, especially in enterprise settings across multiple domains, are still limited.

We wanted to kickstart the conversation on what operationalizing AI in ASEAN businesses really looks like. It was an incredibly engaging session where we focused at length on the enterprise problems to which AI can be applied, particularly the “Day 2” operational ones that keep the business running and generate revenue. Thank you again to our customers and partners for being a part of this journey.

DataOps: Feeding AI

No data = no AI

The first section, on DataOps, brought the focus back to ground reality amidst the overwhelming hype and promises of AI: what it is like to work with data every day to secure and scale a business.

I asked our speakers and guests to share openly about “What is the day to day data problem you have in operations?”


Their answers could be summarized into 5 buckets:

  1. The visibility problem: It’s difficult to know what you have and to track changes in your business consistently. Without a current and accurate view of your assets, their posture, and their behaviors, you are running blind.
  2. The verification problem: Data outputs generated by tools and solutions are not always trustworthy or correct. The human’s role is to analyze data critically and to apply context and other perspectives. However, verifying data and removing false positives remains largely manual, repetitive, and time-consuming.
  3. The foresight problem: Accurately predicting what is likely to happen from data analysis is desirable, but very challenging to achieve, because you need large volumes of reliable data, effective models, and quick feedback loops.
  4. The prioritization problem: With data, events and alerts being generated everywhere, it’s hard to prioritize what is important, and to decide what to do at speed.
  5. The orchestration problem: Bringing all technologies and tools together into cohesive operational pipelines to detect, react, and respond is difficult, especially as the volume and velocity of data keep increasing.

MLOps: Tuning AI

No context = no personalized models

After establishing the ground reality from the data perspective, we zoomed out to see the world of opportunities and possibilities from the AI perspective.

In the MLOps section, Jed Cruz, Data & AI Specialist from Microsoft, shed light on the fast-evolving landscape of AI, powered by huge advancements in foundation models, including large language models. Businesses are spoiled for choice when it comes to plug-and-play AI tools and features that solve specific problems or tasks, such as document and meeting summaries, code generation, or natural-language chatbots.


However, operating AI at scale across multiple use cases and business processes requires custom-trained models. This is where businesses can potentially unlock competitive and differentiating value from AI, rather than using a ready-made solution that everyone else has access to. It is also where most MLOps and LLMOps challenges occur, such as:

  1. Choice paralysis: with AI developments evolving so quickly, it is difficult to decide what direction to take and what models to use.
  2. Lack of technical expertise: even after identifying the use cases and models for AI application, building the pipelines and productionizing requires deep technical knowledge, not to mention the capacity for experimentation (which will definitely come with failures).
  3. Siloed data flow: In a typical enterprise, data generation and management are siloed, which leads to siloed and limited decisions. To achieve holistically contextualized intel, tracking data lineage from its raw form is important for training AI models, but difficult to achieve when the data is not integrated.
  4. No context models: Generic or foundation AI models, no matter how advanced, will not magically produce accurate and precise outputs suited for your unique business context. For machine learning to be operational, AI models must be trained, tuned, and improved with data, logic, and patterns unique to your business.
  5. Feedback loop: AI models only improve with proper feedback, so decisions have to be made about who provides the feedback and through what mechanism. There is also a responsibility to check for and reduce bias and prejudice.
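The feedback-loop challenge above can be illustrated with a minimal human-in-the-loop sketch: a reviewer corrects the model's output, and the disagreement is recorded for the next tuning cycle. This is purely illustrative under assumed names and labels, not a description of any actual Human Managed or Microsoft implementation.

```python
# Minimal sketch of a model feedback loop: a human reviewer corrects the
# model's label, and the correction is logged for the next training run.
# All names, labels, and logic here are hypothetical.

feedback_log: list[dict] = []

def model_predict(text: str) -> str:
    """Stand-in for a trained classifier's prediction."""
    return "phishing" if "urgent" in text.lower() else "benign"

def review(text: str, predicted: str, human_label: str) -> None:
    """Record any disagreement between the model and the reviewer."""
    if predicted != human_label:
        feedback_log.append(
            {"input": text, "predicted": predicted, "correct": human_label}
        )

sample = "Please review the attached invoice"
pred = model_predict(sample)      # model says "benign"
review(sample, pred, "phishing")  # reviewer overrides the model

# feedback_log now holds one correction to fold into the next tuning cycle
```

A real pipeline would also track who reviewed what, so that reviewer bias can itself be audited, which is the second half of the challenge described above.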

IntelOps: Applying AI

No data to serving pipelines = no repeatable processes

Even when you make headway with the DataOps and MLOps problems, one big piece of the operationalized-AI puzzle remains: presenting and serving AI outputs to the right recipient (human or machine) at the right time. The outputs of DataOps (insights and intel) and MLOps (labels) are important in themselves. What is also important, but often deprioritized, is how they get generated, and how they get delivered and tuned to be usable across the business.

At Human Managed, we call this continuous process of data-to-intel, intel-to-labels, and labels-to-serving pipelines IntelOps. To get the most out of AI in your day-to-day business processes, we believe it's crucial to be specific about the distribution of labor between humans and machines:

  • Which human or AI processes analyze which types of data and use cases? (logic, models)
  • Which human or AI processes generate outputs? (insights, recommendations)
  • Which human or AI processes execute actions? (functions, tasks)
  • Finally, how and where do you present the AI outputs? (API, report, notification)

We shared the ways that HM will be applying AI across all of our services. There is no single mode of AI; it differs based on the type of implementation, the function, and the level of automation required.


Specific applications of AI depend on the implementation, function, and degree of automation.

Conclusion: The AI game is one of data, context, and processes

We cannot know what the future of AI holds, but one thing is certain: there will be more data, not less. Everything is a datapoint that could be analyzed if you want it to be. But to what end? Human capacity and process improvements are not increasing at a sufficient rate to keep up with technological developments and expected outcomes, so we need to prioritize which data problems to solve.

Working with data is a given in today’s operations, but many enterprises are far from applying DataOps at scale to contextualize and train AI models that are operational across key business processes.

Making data, models, and their outputs operationalized and always production-ready — not just once, but every day — with limited resources is the real challenge.


The good news is that these problems are not impossible to solve. In fact, many of them have been broken down and solved by different ecosystem players and domain experts, bringing innovative new products and services to the enterprise industry. This shifts the technology service and partnership models as we know them, and makes data and AI platforms more accessible to enterprise customers than ever before. Just as cloud computing distributed infrastructure and software, today’s AI developments are breaking down and distributing AI functions (analytical, generative, model) even further.

The AI game is one of data, context, and processes. We believe that the companies that continuously build the context of their business as distributed and scalable data (instead of tribal knowledge in individuals’ minds) and work with a distributed ecosystem of partners and suppliers will be the ones that grow with AI instead of constantly playing catch-up.

* * *

This article was originally published on LinkedIn on 4 March 2024.