Michael Ham
By MARLENE WILDEN
Los Alamos Daily Post
marlene@ladailypost.com
Step into a conversation with Michael Ham, and within minutes you’re ping-ponging between the history of semiconductors, the future of warfare, and the chilling possibility that by 2030, the internet might be more bot than human.
Ham, director of the Mission Data Stewardship Alliance at Los Alamos National Laboratory, was the keynote speaker this month for the Military Order of the World Wars (MOWW). His presentation, titled “Agentic AI for National Security,” explored the intersection of technological evolution, national defense and digital ethics.
Equal parts scientist, historian and futurist, Ham traced the line between invention and influence—from nanometer technology to the algorithm, from guided missiles to guided misinformation.
“I came to Los Alamos as a physicist doing experiments with neurons on a chip,” Ham told the group. “It didn’t take long to realize that the data itself—the way we collect, preserve and govern it—is the foundation for everything we do in national security.”
At the laboratory, Ham builds frameworks to secure and preserve critical scientific and weapons data. His goal is to create data ecosystems that protect vital research and ensure it remains accessible for future national security needs.
“Good data is what lets us make good decisions,” Ham said. “Without it, even the smartest algorithms are just guessing.”
From Silicon to Sentience
Ham’s talk began with a look back at the semiconductor revolution, drawing parallels between that era and today’s race to dominate artificial intelligence. “Just as silicon manufacturing once determined who led the world, control of AI and energy will define who leads the next era,” he said.
He illustrated the speed of progress with a comparison that made the audience pause.
“When I came to Los Alamos, the Roadrunner supercomputer broke one petaflop in 2008 using 2.5 megawatts of power,” Ham said. “By 2026, Nvidia’s chip will reach 50 petaflops while using just 1,800 watts. That’s like going from a warehouse full of PlayStations glued together to a desktop that thinks faster than a brain.”
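The scale of that jump is easy to check with the figures Ham quoted. A minimal sketch, using only the numbers from the talk (the “flops per watt” ratio is a simple derived comparison, not an official benchmark, and the 2026 figure is a projection):

```python
# Rough efficiency comparison using the figures quoted in the talk.
# The 2026 numbers are projections as stated by Ham, not measured results.

ROADRUNNER_FLOPS = 1e15      # 1 petaflop (Roadrunner, 2008)
ROADRUNNER_WATTS = 2.5e6     # 2.5 megawatts

NEXT_GEN_FLOPS = 50e15       # 50 petaflops (projected 2026 chip)
NEXT_GEN_WATTS = 1800        # 1,800 watts

def flops_per_watt(flops, watts):
    """Energy efficiency in floating-point operations per watt."""
    return flops / watts

old = flops_per_watt(ROADRUNNER_FLOPS, ROADRUNNER_WATTS)   # 4.0e8
new = flops_per_watt(NEXT_GEN_FLOPS, NEXT_GEN_WATTS)       # ~2.78e13
print(f"Efficiency gain: {new / old:,.0f}x")               # ~69,444x
```

Taken at face value, that is roughly a 70,000-fold improvement in compute per watt over 18 years, which is the gap Ham's warehouse-to-desktop image captures.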
He linked that transformation to the battlefield. “Ukraine’s use of autonomous drones and AI-trained targeting systems is historic,” Ham said. “They trained their systems on museum aircraft to recognize exactly where to hit. It’s the first distributed-agency conflict—machines perceiving, planning and acting alongside humans.”
For Ham, the lesson is clear. “The semiconductor was the last arsenal,” he said. “Autonomy is the next.”
From Generative to Agentic
Ham explained that today’s most advanced AI systems are evolving beyond what is called generative AI—which creates text or images—to agentic AI, in which models plan, act and adapt autonomously.
“If GPT-4 was generative, GPT-5 is agentic,” he said. “It doesn’t just respond to your prompt. It decides which models to use, how much power to spend and whether to double-check itself. It’s talking to itself to make decisions.”
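The distinction Ham draws can be made concrete with a toy example. The sketch below is purely illustrative, with hypothetical tool names and a made-up cost field; real agentic systems route between large models and manage a genuine compute budget. What it shows is the loop Ham describes: the system decides which tool to use, acts, and double-checks itself, rather than simply answering a prompt.

```python
# Minimal illustrative sketch of an agentic loop: plan, pick a tool,
# act, then self-check. All names here are hypothetical stand-ins.

def word_count_tool(text):
    return len(text.split())

def reverse_tool(text):
    return text[::-1]

TOOLS = {
    "count": (word_count_tool, 1),   # (function, relative compute cost)
    "reverse": (reverse_tool, 2),
}

def plan(task):
    """Decide which tool fits the task -- a stand-in for model routing."""
    return "count" if "how many" in task else "reverse"

def verify(task, result):
    """Self-check step: re-run and compare instead of trusting one pass."""
    tool, _ = TOOLS[plan(task)]
    return result == tool(task)

def run_agent(task):
    choice = plan(task)              # the agent decides, not the user
    tool, cost = TOOLS[choice]
    result = tool(task)
    assert verify(task, result)      # "double-check itself"
    return choice, cost, result

choice, cost, result = run_agent("how many words are in this task")
print(choice, cost, result)          # count 1 7
```

In a generative-only setup, the model would just produce an answer; here the planning and verification steps are what make the loop “agentic” in the sense Ham describes.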
Ham said this emerging capability brings enormous potential but also significant risk.
“We’re using systems that can make a decision about which pieces of the large language model to use,” he said. “But, if the data underneath isn’t trustworthy, they’ll make confident mistakes at scale.”
He said that even as AI systems accelerate, their reliability still depends on the integrity of human-curated data. “We can’t expect to have effective AI if our data isn’t well-managed,” Ham said. “That’s where the real advantage lies.”
[Figure: Agentic AI vs. Generative AI, core characteristics. Source: WisdomPlexus]
The Internet’s Next War: Truth vs. Trust
Ham’s talk eventually moved from national defense to what he called “the information front line.” Drawing from his years participating in online communities such as Reddit, he described how AI-generated content is already altering the texture of digital conversation.
“In 2016, when the troll farms came online, there was a huge vibe shift,” he said.
That observation led to one of the night’s most unsettling topics—the Dead Internet Theory, which predicts that social media could be dominated almost entirely by AI by the end of the decade.
“The theory says that by 2030, social media may be mostly machines talking to machines,” Ham said. “Imagine logging onto Facebook or YouTube and realizing every comment, every like, every photo share or video created is generated by bots optimizing engagement.”
Recent reports support that concern. According to UNILAD Tech, ChatGPT itself predicted that the Dead Internet Theory could “come true” within five to 15 years. Reddit co-founder Alexis Ohanian recently added his voice to the warning, telling Fortune that “so much of the internet is now just dead—whether it’s botted, quasi-AI or LinkedIn slop.”
Ham said this potential flood of synthetic content represents a national security risk of its own.
“It’ll look like your mom calling you to say aliens are invading—and it sounds just like her. That’s what scares me,” he said.
Ham urged listeners to think about information as a new battlefield, where manipulation can spread faster than truth.
“We’ve entered an era where the tactical edge comes from adaptation speed and control of both data and actuation,” he said.
The presentation slide behind him read: “Tomorrow’s high ground isn’t terrain; it’s control of the autonomous coordination.”
The New Arsenal of Democracy
For Ham, the antidote lies in collaboration. At LANL, he serves on the AI Council—20 to 30 scientists and engineers who meet every few weeks to shape responsible AI policies.
“We shouldn’t fear autonomy. We need to align with our human intent,” he said. “We have to pay a lot of attention to the kinds of policies that we put in place that allow innovation, but with some guidelines and ethics—that’s how we build this new arsenal of democracy.”
Ham smiled as he wrapped up his talk, admitting that he had used ChatGPT to help organize it.
“It’s a good collaborator,” he said with a laugh. “As long as you remember to fact-check your partner.”