
All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted.

Politicians fixate on the global race for technological supremacy between the US and China. They debate the geopolitical implications of chip exports, the latest model releases from each country, and the military applications of AI. Someday, they believe, advancements in AI might tip the scales in a superpower conflict.

But the most important arms race of the 21st century is already happening elsewhere and, while AI is definitely the weapon of choice, the combatants are distributed across dozens of domains.

Academic journals are flooded with AI-generated papers, and are turning to AI to help review submissions. Brazil’s court system started using AI to triage cases, only to face an increasing volume of cases filed with AI help. Open source software developers are being overwhelmed with code contributions from bots. Newspapers, music, social media, education, investigative journalism, hiring, and procurement are all being disrupted by a massive expansion of AI use.

Each of these is an arms race: adversaries within a system iteratively seek an edge over their competition by continuously expanding their use of a common technology.

The beneficiaries of these arms races are US mega-corporations, which are capturing wealth from the rest of us at an unprecedented rate. A substantial fraction of the global economy has reoriented around AI in just the past few years, and that trend is accelerating. In parallel, this industry's lobbying interests are quickly becoming the object, rather than the subject, of US government power.

To understand these arms races, let's look at an example of particular interest to democracies worldwide: how AI is changing the relationship between democratic government and citizens. Interactions that used to happen between people and elected representatives are expanding to a massive scale, with AIs taking on the roles that humans once filled.

In a notorious example from 2017, the US Federal Communications Commission opened a comment platform on the web to get public input on internet regulation. It was quickly flooded with millions of comments fraudulently orchestrated by broadband providers to oppose FCC regulation of their industry. From the other side, a 19-year-old college student responded by submitting millions of comments of his own supporting the regulation. Both sides were using software primitive by the standards of today's AI.

Nearly a decade later, it is getting harder for citizens to tell when they’re talking to a government bot, or when an online conversation about public policy is just bots talking to bots. When constituents leverage AI to communicate better, faster, and more, it pressures government officials to do the same.

This may sound futuristic, but it's become a familiar reality in the US. Staff in the US Congress are using AI to make their constituent email correspondence more efficient. Politicians campaigning for office are adopting AI tools to automate fundraising and voter outreach. By one 2025 estimate, a fifth of public submissions to the Consumer Financial Protection Bureau were already being generated with AI assistance.

People and organizations are adopting AI here because it solves a real problem that has made mass advocacy campaigns ineffective in the past: quantity has been inversely proportional to both quality and relevance. It's easy for government agencies to dismiss general comments in favor of more specific and actionable ones. That makes it hard for regular people to make their voices heard. Most of us don't have the time to learn the specifics or to express ourselves in this kind of detail. AI makes that contextualization and personalization easy. And as the volume and length of constituent comments grow, agencies turn to AI to facilitate review and response.

That’s the arms race. People are using AI to submit comments, which requires those on the receiving end to use AI to wade through the comments received. To the extent that one side does attain an advantage, it will likely be temporary. And yet, there is real harm created when one side exploits another in these adversarial systems. Constituents of democracies lose out if their public servants use AI-generated responses to ignore and dismiss their voices rather than to listen to and include them. The scientific enterprise is weakened if fraudulent papers, sloppily generated by AI, overwhelm legitimate research.

As we write in our new book, Rewiring Democracy, the arms race dynamic is inevitable. Every actor in an adversarial system is incentivized and, in the absence of new regulation in this fast-moving space, free to use new technologies to advance its own interests. Yet some of these examples are heartening. They signal that, even if you face an AI being used against you, there’s an opportunity to use the tech for your own benefit.

But, right now, it’s obvious who is benefiting most from AI. A handful of American Big Tech corporations and their owners are extracting trillions of dollars from the manufacture of AI chips, the development of AI data centers, and the operation of so-called ‘frontier’ AI models. Regardless of which side pulls ahead in each arms race scenario, the house always wins. Corporate AI giants profit from the race dynamic itself.

As formidable as the near-monopoly positions of today’s Big Tech giants may seem, people and governments have substantial capability to fight back. Various democracies are resisting this concentration of wealth and power with the tools of antitrust regulation, protections for human rights, and public alternatives to corporate AI. All of us worried about the AI arms race and committed to preserving the interests of our communities and our democracies should think in both these terms: how to use the tech to our own advantage, and how to resist the concentration of power AI is being exploited to create.

This essay was written with Nathan E. Sanders, and originally appeared in The Times of India.

Good article on new research into password managers whose “zero-knowledge” security claims don’t always hold up.

New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.

This is where I plug my own Password Safe. It isn’t as full-featured as the others, and it doesn’t use the cloud at all, but it offers actual encryption with no recovery features.
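That “no recovery features” point is the crux. In a zero-knowledge design, the vault key is derived client-side from the master password, so the server holds nothing that can decrypt the vault; any server-held recovery or escrow key breaks that property. A minimal Python sketch of the client-side derivation, using only the standard library (the function name and parameters are illustrative, not any vendor’s actual scheme):

```python
import hashlib

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    # Client-side key derivation (PBKDF2-HMAC-SHA256). The server only ever
    # sees the salt and the ciphertext encrypted under this key -- never the
    # master password or the key itself.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                               salt, iterations)

# In practice the salt is random per user, e.g. os.urandom(16).
salt = b"\x00" * 16
key = derive_vault_key("correct horse battery staple", salt)
assert len(key) == 32  # 256-bit vault key, derived entirely on the client
```

If the provider additionally escrows a recovery key on the server, anyone with server control can reach the vault contents without the master password, which is exactly the gap this research probes.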

Interesting:

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

Part 2 of the story. And a Wall Street Journal article.

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release, on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at the time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, ten were assigned CVE-2025 identifiers and two received CVE-2026 identifiers. Adding those ten to the three we already found in the fall 2025 release, AISLE is credited for surfacing 13 of the 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings, either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that is potentially remotely exploitable without valid key material, and for which exploits were quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such a project). Three of the bugs had been present since 1998-2000, missed for over a quarter century by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.
