Fueled by digital transformation, there appears to be no limit to the heights organizations will reach in the coming years. One of the noteworthy technologies helping companies scale these new heights is artificial intelligence (AI). But as AI advances across numerous use cases, a problem lingers: AI is not yet fully trusted by humans. At best, it’s under scrutiny, and we’re still a long way from the human-AI synergy that is the dream of data science and artificial intelligence experts.
One of the factors behind this disconnected reality is the complexity of AI. The other is the opaque approach that AI-driven projects often take to problem solving and decision making. To address this challenge, several business leaders looking to build trust in AI have turned to explainable AI (also called XAI) models.
Explainable AI enables IT leaders, especially data scientists and ML engineers, to interrogate, understand and characterize model accuracy and ensure transparency in AI-based decision making.
Why companies are getting on the explainable AI train
With the global explainable AI market estimated to grow from $3.5 billion in 2020 to $21 billion by 2030, according to a report by ResearchandMarkets, it’s obvious that more companies are now getting on the explainable AI train. Alon Lev, CEO of Israel-based Qwak, a fully managed platform that unifies machine learning (ML) engineering and data operations, told VentureBeat in an interview that this trend “could be directly related to new regulations that require specific industries to provide more transparency on model predictions.” The growth of explainable AI is driven by the need to build trust in AI models, he said.
He also noted that another growing trend in explainable AI is the use of SHapley Additive exPlanations (SHAP) values, a game-theoretic approach to explaining the outcomes of ML models.
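To make the game-theoretic idea concrete, here is a minimal from-scratch sketch of Shapley-value attribution, the principle behind SHAP. It enumerates every feature coalition exactly for a tiny linear model; real projects would use the `shap` package rather than this illustrative code, and the model, weights and data below are invented for the example.

```python
# Illustrative sketch of exact Shapley-value attribution (the idea behind SHAP).
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley values for the prediction f(x).

    Features absent from a coalition are replaced by their average over
    the `background` rows (an 'interventional' value function)."""
    n = len(x)
    baseline = [sum(row[i] for row in background) / len(background) for i in range(n)]

    def value(coalition):
        # Model output with coalition features taken from x, the rest from baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight for a coalition of size |S|.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: for f(x) = w·x + b, the Shapley value of feature i
# is exactly w_i * (x_i - mean(background_i)), so we can verify the result.
w, b = [3.0, 1.0, 0.0], 0.5
f = lambda z: sum(wi * zi for wi, zi in zip(w, z)) + b
background = [[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]]  # baseline means are [1, 1, 1]
x = [1.0, 4.0, 5.0]

phi = shapley_values(f, x, background)
print(phi)  # → [0.0, 3.0, 0.0]
```

By the "local accuracy" property, the attributions sum to the model's prediction for `x` minus its prediction at the baseline, which is what makes Shapley values attractive for explaining individual predictions to regulators.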
“We are seeing that our fintech and healthcare clients are more involved in the topic, as they are sometimes required by regulation to explain why a model provided a specific prediction, how the prediction occurred and what factors were considered. In these specific industries, we’re seeing more models with built-in explainable AI by default,” he added.
A growing market with difficult problems to solve
There is no shortage of startups in the AI and MLops space, with companies including Comet, Iterative.ai, ZenML, Landing AI, Domino Data Lab, Weights and Biases and others developing MLops solutions. Qwak is another startup in the space; it focuses on automating MLops processes and allows companies to manage models as they are integrated with their products.
Purporting to accelerate the potential of MLops with a different approach, Domino Data Lab focuses on building on-premise systems that integrate with cloud-based GPUs as part of Nexus, its enterprise-facing initiative launched in partnership with Nvidia. ZenML, for its part, offers a framework of tools and infrastructure that serves as a standardization layer, allowing data scientists to iterate on promising ideas and build production-ready ML pipelines.
Comet prides itself on providing a self-hosted, cloud-based MLops solution that enables data scientists and engineers to track, compare and optimize experiments and models. The goal is to provide insights and data to build more accurate AI models, while improving productivity, collaboration and explainability across teams.
In the world of AI development, the most dangerous journey to take is from prototyping to production. Research has shown that most AI projects never go into production, with an 87% failure rate in a cutthroat market. However, this in no way implies that established companies and startups are not successful at riding the wave of AI innovation.
Addressing the challenges Qwak faces in deploying explainable ML and AI solutions for its users, Lev said that although Qwak does not build its own ML models, it does provide the tools that allow its customers to efficiently train, adapt, test, monitor and productionize the models they build. “The challenge we solve, in a nutshell, is the dependence of data scientists on engineering activities,” he said.
By shortening the model-creation cycle and eliminating the underlying engineering friction, Lev says, Qwak helps both data scientists and engineers continuously deploy ML models and automate the process using its platform.
In a tough market with various competitors, Lev says Qwak is the only MLops/ML engineering platform that covers the entire ML workflow, from feature building and data preparation to deploying models in production.
“Our platform is simple to use for both data scientists and engineers, and implementing the platform is as simple as a single line of code. The build system will standardize the structure of your project and help data scientists and ML engineers generate verifiable and reproducible models. In addition, it will automatically version the code, data and parameters of all models, creating distributable artifacts. Its model versioning also tracks the disparities between multiple versions, warding off data and concept drift.”
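The data-drift monitoring described above can be illustrated with a population stability index (PSI) check, a common way to compare a training ("reference") feature distribution against a production ("current") one. This is a generic sketch with made-up data, not Qwak's actual implementation.

```python
# Generic data-drift check using the population stability index (PSI).
import math
import random

def psi(reference, current, bins=10):
    """Population stability index between two 1-D samples.

    Rule of thumb: PSI < 0.1 = stable, 0.1-0.25 = moderate shift,
    > 0.25 = significant drift worth alerting on."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def frac(sample, i):
        n = sum(1 for v in sample if edges[i] <= v < edges[i + 1])
        return max(n / len(sample), 1e-6)  # avoid log(0) for empty bins

    return sum(
        (frac(current, i) - frac(reference, i))
        * math.log(frac(current, i) / frac(reference, i))
        for i in range(bins)
    )

random.seed(0)
train = [random.gauss(0, 1) for _ in range(2000)]    # training-time feature
same = [random.gauss(0, 1) for _ in range(2000)]     # production, no drift
shifted = [random.gauss(1.5, 1) for _ in range(2000)]  # production, drifted mean

print(psi(train, same) < 0.1)      # same distribution: low PSI
print(psi(train, shifted) > 0.25)  # shifted mean: flagged as drift
```

In a monitoring pipeline, a check like this would run per feature on a schedule, with alerts firing when the index crosses the drift threshold.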
Founded in 2021 by Alon Lev (former vice president of data operations at Payoneer), Yuval Fernbach (former ML specialist at Amazon), Ran Romano (former head of ML data and engineering at Wix.com) and Lior Credo (former head of business development at IronSource), the Qwak team says it has shaped its roadmap and approach to prepare for the explainable AI market.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain insights into transformative business technology and transact. Discover our Briefings.