KIROI - Artificial Intelligence Return on Invest
The AI strategy for decision-makers and managers

Business excellence for decision-makers & managers by and with Sanjay Sauldie

20 May 2025

Tooltest: How decision-makers find the truly best AI


Choosing the right technological solution today is like navigating an opaque jungle. Countless vendors promise revolutionary results and claim their systems are superior to all others. But how do responsible leaders actually separate the wheat from the chaff? The crucial tool test, the one that shows decision-makers how to find the truly best AI, doesn't start with superficial comparisons but with a thorough analysis of your own needs and requirements. In this post, you'll learn which criteria really matter and how to proceed systematically.

Why superficial comparisons can be misleading

Many companies make a fundamental mistake when evaluating new technologies. They rely on glossy brochures and marketing promises, often overlooking the crucial nuances in the process. A system that works brilliantly for a competitor might fail completely within their own organisation. The reasons for this are diverse, ranging from different data structures to divergent process landscapes. This is particularly evident in the realm of data-driven decision-making.

Let's take a medium-sized manufacturing company as an example. This company might need a quality control solution. A financial services provider, on the other hand, is looking for fraud detection systems. And a retail company might want to optimise demand forecasting. Each of these use cases requires completely different skills and algorithms. That is why blanket recommendations regularly fail when faced with the reality of everyday business [1].

The hidden costs of flawed evaluation

Bad decisions made during technology selection can have significant financial consequences. Implementation projects often fail not because of the technology itself, but due to incorrect expectations and insufficient preparation. Clients frequently report failed pilot projects and dashed hopes. These experiences leave deep marks on the company culture, making employees sceptical of future innovation initiatives.

For example, a car parts supplier invested significant sums in a predictive maintenance system. After implementation, it emerged that the solution was not compatible with the existing sensor data. A pharmaceutical company, in turn, opted for a document analysis solution. However, this could not map the specific regulatory requirements. And a logistics service provider acquired a route optimisation system that could not process real-time data.

Best practice with a KIROI customer

An internationally operating mechanical engineering company faced the challenge of identifying the right partner for process optimisation from an oversupply of more than fifteen different providers. The company had already undergone two unsuccessful implementation attempts and had become accordingly sceptical. As part of the transruptions coaching support, we jointly developed a structured evaluation framework that took into account the specific requirements of production. We first defined clear success criteria based on actual business objectives, and then created a catalogue of criteria with weighted evaluation dimensions. The integration of the specialist departments into the selection process was particularly important, as this was the only way we could ensure that the technical requirements also met practical needs. After a three-month structured selection process, the decision was made in favour of a medium-sized provider who, although less well-known, exhibited a significantly better fit with the existing technology stack. The subsequent implementation proceeded much more smoothly than previous attempts, and the system now successfully supports quality assurance at several plants.

Tooltest: How decision-makers find the truly best AI through systematic criteria

A professional evaluation process follows a clear structure and considers multiple dimensions. Technical performance is just one aspect among many. Factors such as integration capability, scalability, and vendor stability are equally important. Leaders should systematically work through and weight these dimensions. Only then does a complete picture of the available options emerge [2].
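To make such a weighted catalogue of criteria concrete, here is a minimal Python sketch of a scoring matrix. The criteria, weights, and vendor scores are purely illustrative assumptions for this post, not a prescribed KIROI catalogue; in practice, the weights would come from your own requirements analysis.

```python
# Minimal sketch of a weighted scoring matrix for vendor evaluation.
# All criteria, weights, and scores below are illustrative assumptions.

criteria_weights = {
    "technical_performance": 0.25,
    "integration_capability": 0.25,
    "scalability": 0.20,
    "vendor_stability": 0.15,
    "support_quality": 0.15,
}

# Scores on a 1-5 scale, e.g. collected from the specialist departments.
vendor_scores = {
    "Vendor A": {"technical_performance": 5, "integration_capability": 3,
                 "scalability": 4, "vendor_stability": 4, "support_quality": 3},
    "Vendor B": {"technical_performance": 4, "integration_capability": 5,
                 "scalability": 4, "vendor_stability": 3, "support_quality": 5},
}

def weighted_total(scores):
    """Multiply each criterion score by its weight and sum the results."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank the vendors by their weighted total score, highest first.
for vendor, scores in sorted(vendor_scores.items(),
                             key=lambda kv: weighted_total(kv[1]),
                             reverse=True):
    print(f"{vendor}: {weighted_total(scores):.2f}")
```

Such a matrix does not replace judgement, but it forces the weighting discussion into the open: the departments must agree on the weights before anyone sees the vendor scores.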

In the manufacturing sector, for example, real-time capability plays a key role. Delays of just a few milliseconds can have a significant impact in a highly automated production environment. In the healthcare sector, on the other hand, data protection and certifications are the main priorities. And in the retail sector, the cost per transaction is often the decisive factor.

Assessing the technical dimension correctly

Technical assessments should never be viewed in isolation. They must always be considered in the context of the specific use cases. A system with impressive benchmark figures can still disappoint in practice. The quality of the training data plays a crucial role. Equally important is the system's ability to handle the company's specific data formats.

For example, an energy supplier must be able to process time-series data from millions of meters. An insurance company, on the other hand, requires the ability to analyse unstructured documents. And a media company may expect advanced content classification capabilities. Each of these use cases demands different technical strengths and architectures.

Tooltest: How decision-makers can find the truly best AI for their specific industry

Industry-specific requirements differ significantly from one another. What is considered a standard feature in one industry can be completely irrelevant in another. Therefore, a careful analysis of industry-standard requirements is recommended. However, this analysis should not stop at standard requirements. Rather, it is important to identify the differentiating factors [3].

Let’s consider three different real-world scenarios. A chemical company requires systems capable of handling process data from various plants. Data quality varies considerably between older and newer production lines. A telecommunications provider, on the other hand, needs to evaluate systems capable of processing enormous volumes of data from network protocols. And a construction company may be looking for solutions for project planning and resource optimisation.

Best practice with a KIROI customer

A leading provider of industrial services approached us with a specific challenge: how to compare different providers of analytical tools objectively and fairly. The company had already shortlisted some options internally but was unsure about the validity of the criteria used. As part of our transruptions coaching support, we jointly developed a multi-stage evaluation approach that took both quantitative and qualitative aspects into account. We conducted structured demonstrations with all providers and ensured that each provider used the same test data sets. The involvement of reference customers from comparable industries proved particularly insightful, as their experiences provided valuable insights into the practical performance of the systems. The decision was ultimately made in favour of a provider who, whilst not the cheapest, offered the best combination of technical maturity, industry expertise and support quality. The company now reports a significant improvement in the quality of decision-making within operational processes and a measurable increase in customer satisfaction.

The human factor in technology assessment

Technology decisions are ultimately made by people. This is why soft factors play at least as important a role as hard metrics. The acceptance of future users determines success or failure. A technically superior system that is rejected by employees provides no added value. Leaders should therefore focus on broad involvement at an early stage.

In a hospital, for example, doctors and nursing staff must be willing to accept and use the new tools. A bank, on the other hand, must ensure that compliance officers can understand and audit the system. And in an engineering firm, technical experts expect technical systems to support their work, not replace it.

Stakeholder management as a success factor

Engaging relevant stakeholders requires careful planning and communication. Different stakeholders have differing perspectives and priorities. The IT department focuses on security and maintainability. The business departments are interested in usability and functionality. And management focuses on costs and strategic alignment [4].

An example from the transport sector illustrates this dynamic. The dispatchers wanted a system that valued their experience and supported them in decision-making. The IT department insisted on seamless integration into the existing system landscape. And management expected demonstrable efficiency gains within the first year.

Practical steps for successful evaluation

A structured evaluation process typically comprises several phases. The first phase involves requirements analysis and market research. The second phase consists of a pre-selection of potential candidates. The third phase includes detailed evaluations and pilot projects. And the fourth phase is where the final decision is made and documented.

For example, a food manufacturer went through such a structured process over six months. A fashion company, on the other hand, opted for an agile approach with several parallel pilot projects. And an administrative operation relied on a particularly transparent process with external support.

Tooltest: How decision-makers find the truly best AI through pilot projects

Pilot projects are an essential element of any serious evaluation. They allow for testing under realistic conditions. This often reveals strengths and weaknesses that remain hidden during demonstrations. The pilot phase should be long enough to achieve meaningful results. At the same time, it should not be too long, so as not to unnecessarily delay the decision-making process [5].

An example from the textile industry demonstrates the importance of well-planned pilots. The company tested three different defect detection systems in parallel. Each system received the same test data and the same support. The results were surprising and refuted several initial assumptions.
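To illustrate what such a like-for-like pilot comparison can look like, the following Python sketch scores several candidate defect-detection systems against one shared, labelled test set. All data and system outputs here are invented for illustration; real pilots would use production samples and far larger test sets.

```python
# Illustrative sketch: comparing pilot systems on the identical test set.
# Ground truth labels: 1 = defective part, 0 = good part (invented data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]

# Hypothetical predictions each pilot system produced on the same parts.
system_outputs = {
    "System A": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0],
    "System B": [1, 1, 1, 1, 0, 0, 0, 0, 1, 0],
    "System C": [0, 0, 1, 1, 0, 0, 1, 0, 1, 1],
}

def precision_recall(truth, predictions):
    """Compute precision and recall from true and predicted defect labels."""
    tp = sum(1 for t, p in zip(truth, predictions) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(truth, predictions) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, predictions) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

for name, predictions in system_outputs.items():
    p, r = precision_recall(y_true, predictions)
    print(f"{name}: precision={p:.2f}, recall={r:.2f}")
```

Because every system is measured on the same data with the same metrics, differences in the results reflect the systems themselves rather than differences in test conditions, which is exactly what the textile example above relied on.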

Best practice with a KIROI customer

A medium-sized retail company with several hundred branches faced the task of selecting a product range optimisation system that could map the complex dependencies between different product groups, seasonal effects and regional differences. The initial shortlist was based on the known market leaders, but as part of our transruptions coaching support, we expanded the pool of candidates to include specialised providers with an industry focus. Together, we developed a set of criteria that, in addition to pure forecast accuracy, also considered factors such as the explainability of recommendations, speed of response to market changes and the quality of the user interface for branch employees. The subsequent pilot phase lasted three months and included ten test branches with different characteristics in order to test the robustness of the systems under various conditions. The result was clear and showed that a less well-known provider offered the best combination of precision and practicality. After full implementation, the company was able to report a measurable improvement in product availability while simultaneously reducing excess stock.

My KIROI Analysis

Selecting the right technological solution remains one of the most demanding tasks for leaders today. My experience from numerous support projects shows that success and failure often hang by a thread. The decisive difference rarely lies in the technology itself, but rather in the quality of the selection process. Companies that proceed systematically and involve all relevant stakeholders achieve significantly better results than those that make hasty decisions.

Particularly important, it seems to me, is the realisation that there is no single best solution for everyone. Each company has its own requirements, strengths, and limitations. A good tool test, one that shows decision-makers how to find the truly best AI, takes these individual factors into account and creates an objective framework for evaluation. Transruptions coaching support can provide valuable impetus here by bringing in external perspectives and adapting proven methods.

Clients often report that the structured selection process itself already represents added value. The in-depth examination of their own requirements often leads to new insights into processes and data structures. These insights pay off regardless of the final technology decision. I therefore recommend allocating sufficient time and resources for the evaluation phase and considering it a strategic investment.

Further links from the text above:

[1] Gartner – IT Research and Analysis
[2] McKinsey Digital Insights
[3] Forrester Research
[4] Harvard Business Review – Technology
[5] MIT Research

For more information or if you have any questions, please contact us, or read more blog posts on the topic of artificial intelligence here.
