The Dichotomy Within AI: Overused or Undertrained?

With the advent of AI has come a concerning pattern - the application of AI to everything, without testing any of it. This has caused serious long-term security problems, sometimes in unexpected ways.

Liliana Albright

8/5/2025 · 2 min read


A Miracle Without Consideration

AI has been around for a very long time, long before OpenAI started releasing groundbreaking models. Since 1956, with the release of the Logic Theorist problem-solving pseudo-agentic application, there has been incredible interest in the idea of having machinery find fast solutions to problems that would take a human many years of training to solve. AI has also appeared throughout popular fiction, with praise in science fiction and with caution in the cyberpunk and dystopian genres.

As the tech behind our modern agentic AI evolved from the push for machine learning in the '90s to the excitement around deep learning in the 2010s, the possibilities for application grew as well. But alongside these advancements came a lack of consideration for their overapplication.

When ChatGPT first became a mainstream topic, it felt as though one couldn’t start a conversation without a recommendation to consult ChatGPT. We asked it questions about anything that came to mind, and it was genuinely fun. Then, beneath the surface, a deeper reliance grew.

From Fun to Fatigue

It began with chatbots trained on higher-complexity topics and data, like those specialized in STEM fields. We had already seen machine learning create large gaps in the data it considered, such as the underrepresentation of medical data for certain ethnic groups, leading to significant misdiagnoses or dangerous advice that could worsen conditions for people who were already underrepresented.

Then, we started noticing problems with data leakage through misconfiguration, training data re-ingestion, and regurgitation. As people asked questions that included sensitive information, their conversations were used to retrain or improve the models, leaving behind artifacts of exchanges that users thought would remain private.
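
One common mitigation is to scrub obviously sensitive material from prompts before they ever reach a hosted model. The sketch below is a minimal, hypothetical illustration in Python; the patterns and the query_model stub are placeholders, not references to any specific provider's API.

```python
import re

# Illustrative patterns for a few classes of sensitive data; a real deployment
# would need far broader coverage and human review of the rules.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the local environment."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def query_model(prompt: str) -> str:
    # Stand-in for whatever hosted-LLM client is actually in use.
    return f"(model response to: {prompt!r})"

def ask(prompt: str) -> str:
    # Only the scrubbed text crosses the trust boundary, so even if the provider
    # retains or re-ingests conversations, the sensitive values never left.
    return query_model(scrub(prompt))

print(ask("Our admin contact is jane.doe@example.com, key sk-abcdef1234567890abcd"))
```

The specific patterns matter less than the habit: treat every prompt sent to someone else's model as something that may eventually be retained, reviewed, or regurgitated.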

One of the sillier but equally harmful problems with AI came in the form of hallucinations. Random, bizarre data and miscalculations would find their way into responses and, most concerningly, into unchecked or infrequently reviewed actions taken by agentic AI models. Though these hallucinations received a lot of attention for their goofy wording or bizarre mentions, they have frequently gone unnoticed when they are more subtle. And it’s getting worse as time goes on.
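
Catching these quieter failures usually means keeping a human gate in front of any consequential agent action. The snippet below is a minimal sketch of that idea in Python; the tool names and allow-list are hypothetical, not taken from any particular agent framework.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str        # e.g. "delete_host" or "send_report"
    arguments: dict  # parameters the agent wants to pass

# Actions considered low-risk enough to run without a reviewer looking first.
AUTO_APPROVED = {"read_file", "search_logs"}

def execute(action: ProposedAction, run_tool) -> str:
    """Gate every agent-proposed action: anything outside the allow-list waits
    for explicit human approval instead of running silently."""
    if action.tool in AUTO_APPROVED:
        return run_tool(action)
    answer = input(f"Agent wants {action.tool}({action.arguments}). Approve? [y/N] ")
    if answer.strip().lower() == "y":
        return run_tool(action)
    return "rejected by reviewer"
```

The point is not this particular allow-list, but that a hallucinated or nonsensical action has to pass a human before it does anything irreversible.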

The Useful Side

As mentioned in the BSides talk Disinform Your Surroundings, AI is like any other tool - it has a useful range, and downsides when applied outside of that range without consideration. Some applications of AI aimed at security analysis or research end up badly maintained, but others can solve problems quickly and become amazing tools for analysts and engineers.

Defense In Orbit’s approach to AI is a careful, ‘as needed’ one - humans enrich every response and will always remain useful with or without AI input. Our tooling includes carefully curated models that focus on vetted and reviewed security research that can be applied to active investigations, without that investigation data influencing the model in any way (thereby eliminating the chance of poisoning or leaking data). We also do what everyone should do - review all data, not just that provided by AI, when making decisions or creating materials for our customers.
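
To make the "research informs the investigation, but the investigation never feeds the model" pattern concrete, here is a minimal, hypothetical sketch in Python. It illustrates the general read-only retrieval idea rather than Defense In Orbit's actual tooling, and the corpus entries are placeholders.

```python
# A read-only corpus of vetted, reviewed research. Entries are added only
# through human review, never from live queries or model output.
VETTED_RESEARCH = {
    "credential-phishing": "Reviewed analysis of common credential-phishing kits ...",
    "supply-chain": "Reviewed write-up of recent supply-chain compromise techniques ...",
}

def lookup(topic: str) -> list[str]:
    """Return matching vetted entries; the corpus is never modified here."""
    topic = topic.lower()
    return [text for key, text in VETTED_RESEARCH.items() if topic in key]

def enrich_investigation(query: str) -> dict:
    # The analyst's query reads from the corpus but writes nothing back, so a
    # sensitive or malicious query can neither poison future results nor leak
    # into anything that later gets trained on.
    return {"query": query, "supporting_research": lookup(query)}

print(enrich_investigation("supply-chain"))
```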

Learn more about our thoughts on AI here.