Can AI outsmart an Analyst?

We’re constantly told that artificial intelligence can solve a growing host of needs: organizing your emails, writing content, or even conducting open-source intelligence research for your project. Running this kind of competitive intelligence may seem like an ideal use of an AI tool, such as a large language model (LLM). After all, it’s just gathering online sources, right? In reality, even the most modern AI tools can’t match KAOH’s CI analysts. Let’s look at a few AI use cases and issues that show how vital the personal touch is:

  1. Stakeholder research

    Let's say you need a list of board or planning commission members for a small township in rural America. While this could easily be a question for ChatGPT or a more research-focused application, many of these tools rely on municipal websites that are rarely updated and frequently share outdated information. These tools also mix up municipalities across states (Champaign, Ohio vs. Champaign, Illinois) and confuse counties with municipalities (Bay City, Michigan vs. Bay County, Michigan).

  2. Monitoring community reactions

    AI-powered source aggregators may have success in picking up major stories about a project, but they struggle to access vital social media chatter beyond the headlines. Social media channels often feature content that cannot be accessed without a human analyst’s know-how. Nextdoor neighborhoods, LinkedIn posts, and most Twitter or Instagram content remain almost entirely inaccessible to AI-powered source aggregators, along with a significant amount of hard-to-find Facebook content.

  3. Analyzing sentiment and reactions

    CI tools using AI often advertise an ability to accurately monitor sentiment in the sources they find, but we have found that these tools rarely capture a mention’s true meaning and instead rely on a small number of trigger words to assign a positive or negative value. For example, a Facebook post that reads “I would be so happy if the wind turbine developers left town” may be flagged as positive, since the terms “happy” and “wind turbine” appear in close proximity. Many sentiments are more complex than what a tool can identify, and more nuanced viewpoints get overlooked on a linear scale. Some AI-powered programs claim they can create actionable analyses, such as community engagement plans, out of a set of media mentions. When such an analysis is available, it typically offers a vague overview of reactions to a project without a coherent, project-specific plan to address those concerns.

  4. Telling you what you want to hear

    Standard AI tools often prioritize output that satisfies the user over output that is accurate. If you ask an LLM for fifty words about professional ice skating in Tahiti, it will most likely answer with generic statements about the sport’s global popularity instead of telling you that the small South Pacific island has never hosted an ice-skating competition. We see the same kind of output when asking an AI tool an open-ended question about stakeholder opinions, popular perceptions, or the permitting outlook for a project that has not yet been announced.
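The trigger-word scoring described in point 3 can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual algorithm; the word lists and scoring rule are invented for the example, but they show how a post like the wind-turbine one gets mislabeled:

```python
# Hypothetical trigger-word sentiment scorer, for illustration only.
# Real CI tools use larger lexicons, but the failure mode is the same:
# the score reflects isolated words, not what the sentence actually says.
POSITIVE = {"happy", "great", "love", "support"}
NEGATIVE = {"angry", "oppose", "terrible", "noise"}

def naive_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# The post is plainly hostile to the project, but the only trigger word
# present is "happy", so the scorer calls it positive.
print(naive_sentiment("I would be so happy if the wind turbine developers left town"))
# → positive
```

Because the scorer never models negation, conditionals, or sarcasm, "so happy if the developers left town" and "so happy the developers arrived" score identically.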

We align business goals with social goals.
© 2025 KAOH Media Enterprises, Inc.