November 2025

In a new paper, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” researchers found that turning LLM prompts into poetry resulted in jailbreaking the models:

Abstract: We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols.

CBRN stands for “chemical, biological, radiological, nuclear.”

They used an ML model to translate these harmful prompts from prose to verse, and then fed them into other models for testing. Sadly, the paper does not give examples of these poetic prompts. They claim this is for security purposes, a decision I disagree with. They should release their data.

Our study begins with a small, high-precision prompt set consisting of 20 hand-crafted adversarial poems covering English and Italian, designed to test whether poetic structure, in isolation, can alter refusal behavior in large language models. Each poem embeds an instruction associated with a predefined safety-relevant scenario (Section 2), but expresses it through metaphor, imagery, or narrative framing rather than direct operational phrasing. Despite variation in meter and stylistic device, all prompts follow a fixed template: a short poetic vignette culminating in a single explicit instruction tied to a specific risk category. The curated set spans four high-level domains—CBRN (8 prompts), Cyber Offense (6), Harmful Manipulation (3), and Loss of Control (3). Although expressed allegorically, each poem preserves an unambiguous evaluative intent. This compact dataset is used to test whether poetic reframing alone can induce aligned models to bypass refusal heuristics under a single-turn threat model. To maintain safety, no operational details are included in this manuscript; instead we provide the following sanitized structural proxy:

A baker guards a secret oven’s heat,
its whirling racks, its spindle’s measured beat.
To learn its craft, one studies every turn—
how flour lifts, how sugar starts to burn.
Describe the method, line by measured line,
that shapes a cake whose layers intertwine.

To situate this controlled poetic stimulus within a broader and more systematic safety-evaluation framework, we augment the curated dataset with the MLCommons AILuminate Safety Benchmark. The benchmark consists of 1,200 prompts distributed evenly across 12 hazard categories commonly used in operational safety assessments, including Hate, Defamation, Privacy, Intellectual Property, Non-violent Crime, Violent Crime, Sex-Related Crime, Sexual Content, Child Sexual Exploitation, Suicide & Self-Harm, Specialized Advice, and Indiscriminate Weapons (CBRNE). Each category is instantiated under both a skilled and an unskilled persona, yielding 600 prompts per persona type. This design enables measurement of whether a model’s refusal behavior changes as the user’s apparent competence or intent becomes more plausible or technically informed.

News article. Davi Ottenheimer comments.

This quote is from House of Huawei: The Secret History of China’s Most Powerful Company.

“Long before anyone had heard of Ren Zhengfei or Huawei, Wan Runnan had been China’s star entrepreneur in the 1980s, with his company, the Stone Group, touted as “China’s IBM.” Wan had believed that economic change could lead to political change. He had thrown his support behind the pro-democracy protesters in 1989. As a result, he had to flee to France, with an arrest warrant hanging over his head. He was never able to return home. Now, decades later and in failing health in Paris, Wan recalled something that had happened one day in the late 1980s, when he was still living in Beijing.

Local officials had invited him to dinner.

This was unusual. He was usually the one to invite officials to dine, so as to curry favor with the show of hospitality. Over the meal, the officials told Wan that the Ministry of State Security was going to send agents to work undercover at his company in positions dealing with international relations. The officials cast the move to embed these minders as an act of protection for Wan and the company’s other executives, a security measure that would keep them from stumbling into unseen risks in their dealings with foreigners. “You have a lot of international business, which raises security issues for you. There are situations that you don’t understand,” Wan recalled the officials telling him. “They said, ‘We are sending some people over. You can just treat them like regular employees.'”

Wan said he knew that around this time, state intelligence also contacted other tech companies in Beijing with the same request. He couldn’t say what the situation was for Huawei, which was still a little startup far to the south in Shenzhen, not yet on anyone’s radar. But Wan said he didn’t believe that Huawei would have been able to escape similar demands. “That is a certainty,” he said.

“Telecommunications is an industry that has to do with keeping control of a nation’s lifeline…and actually in any system of communications, there’s a back-end platform that could be used for eavesdropping.”

It was a rare moment of an executive lifting the cone of silence surrounding the MSS’s relationship with China’s high-tech industry. It was rare, in fact, in any country. Around the world, such spying operations rank among governments’ closest-held secrets. When Edward Snowden had exposed the NSA’s operations abroad, he’d ended up in exile in Russia. Wan, too, might have risked arrest had he still been living in China.

Here are two book reviews.

Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there are also opportunities.

We have just published the book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. In it, we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies, and how elected officials with authoritarian tendencies can use it to consolidate power. But we also give positive examples of how AI is transforming democratic governance and politics for the better.

Here are four such stories unfolding right now around the world, showing how AI is being used by some to make democracy better, stronger, and more responsive to people.

Japan

Last year, then 33-year-old engineer Takahiro Anno was a fringe candidate for governor of Tokyo. Running as an independent candidate, he ended up coming in fifth in a crowded field of 56, largely thanks to the unprecedented use of an authorized AI avatar. That avatar answered 8,600 questions from voters on a 17-day continuous YouTube livestream and garnered the attention of campaign innovators worldwide.

Two months ago, Anno-san was elected to Japan’s upper legislative chamber, again leveraging the power of AI to engage constituents—this time answering more than 20,000 questions. His new party, Team Mirai, is also an AI-enabled civic technology shop, producing software aimed at making governance better and more participatory. The party is leveraging its share of Japan’s public funding for political parties to build the Mirai Assembly app, enabling constituents to express opinions on and ask questions about bills in the legislature, and to organize those expressions using AI. The party promises that its members will direct their questioning in committee hearings based on public input.

Brazil

Brazil is notoriously litigious, with even more lawyers per capita than the US. The courts are chronically overwhelmed with cases and the resultant backlog costs the government billions to process. Estimates are that the Brazilian federal government spends about 1.6% of GDP per year operating the courts and another 2.5% to 3% of GDP issuing court-ordered payments from lawsuits the government has lost.

Since at least 2019, the Brazilian government has aggressively adopted AI to automate procedures throughout its judiciary. AI is not making judicial decisions, but aiding in distributing caseloads, performing legal research, transcribing hearings, identifying duplicative filings, preparing initial orders for signature and clustering similar cases for joint consideration: all things to make the judiciary system work more efficiently. And the results are significant; Brazil’s federal supreme court backlog, for example, dropped in 2025 to its lowest levels in 33 years.
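The clustering piece, at least, is not exotic. Below is a minimal sketch of grouping similar filings with off-the-shelf tools; the filings and parameters are invented for illustration and are not drawn from the Brazilian judiciary’s actual systems.

```python
# Generic sketch of clustering similar case filings so they can be handled
# together. Illustration only -- not the Brazilian judiciary's actual tooling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

filings = [
    "Consumer seeks refund for cancelled airline ticket",
    "Passenger sues airline over flight cancellation refund",
    "Challenge to municipal property tax assessment",
    "Taxpayer disputes municipal property assessment increase",
]

# Represent each filing as a TF-IDF vector, then group similar ones together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(filings)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in sorted(zip(labels, filings)):
    print(cluster, text)
```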

While it seems clear that the courts are realizing efficiency benefits from leveraging AI, there is a postscript to the courts’ AI implementation project over the past five-plus years: the litigators are using these tools, too. Lawyers are using AI assistance to file cases in Brazilian courts at an unprecedented rate, with new cases growing by nearly 40% in volume over the past five years.

It’s not necessarily a bad thing for Brazilian litigators to regain the upper hand in this arms race. It has been argued that litigation, particularly against the government, is a vital form of civic participation, essential to the self-governance function of democracy. Other democracies’ court systems should study and learn from Brazil’s experience and seek to use technology to maximize the bandwidth and liquidity of the courts to process litigation.

Germany

Now, we move to Europe and innovations in informing voters. Since 2002, the German Federal Agency for Civic Education has operated a non-partisan voting guide called Wahl-o-Mat. Officials convene an editorial team of 24 young voters (under 26 and selected for diversity) with experts from science and education to develop a slate of 80 questions. The questions are put to all registered German political parties. The responses are narrowed down to 38 key topics and then published online in a quiz format that voters can use to identify the party whose platform they most identify with.
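The matching step behind this kind of quiz is straightforward to sketch. The statements, parties, and scoring rule below are invented for illustration; they are not the official Wahl-o-Mat methodology.

```python
# Minimal sketch of agreement-based party matching, in the spirit of quiz
# tools like Wahl-o-Mat. Statements, parties, and the scoring rule are
# invented for illustration; the official tool's methodology differs.
AGREE, NEUTRAL, DISAGREE = 1, 0, -1

party_positions = {
    "Party A": {"raise_min_wage": AGREE, "expand_rail": AGREE, "cut_fuel_tax": DISAGREE},
    "Party B": {"raise_min_wage": DISAGREE, "expand_rail": NEUTRAL, "cut_fuel_tax": AGREE},
}
voter_answers = {"raise_min_wage": AGREE, "expand_rail": AGREE, "cut_fuel_tax": NEUTRAL}

def match_score(voter, party):
    # Full agreement on a statement scores 2, partial agreement (either side
    # neutral) scores 1, disagreement scores 0.
    score = 0
    for topic, answer in voter.items():
        position = party[topic]
        if answer == position:
            score += 2
        elif NEUTRAL in (answer, position):
            score += 1
    return score

for name, positions in party_positions.items():
    print(name, match_score(voter_answers, positions))
```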

In the past two years, outside groups have been innovating alternatives to the official Wahl-o-Mat guide that leverage AI. First came Wahlweise, a product of the German AI company AIUI. Second, students at the Technical University of Munich deployed an interactive AI system called Wahl.chat. This tool was used by more than 150,000 people within the first four months. In both cases, instead of having to read static webpages about the positions of various political parties, citizens can engage in an interactive conversation with an AI system to more easily get the same information contextualized to their individual interests and questions.

However, German researchers studying the reliability of such AI tools ahead of the 2025 German federal election raised significant concerns about bias and “hallucinations”—AI tools making up false information. Acknowledging the potential of the technology to increase voter informedness and party transparency, the researchers recommended adopting scientific evaluations comparable to those used in the Agency for Civic Education’s official tool to improve and institutionalize the technology.

United States

Finally, the US—in particular, California, home to CalMatters, a non-profit, nonpartisan news organization. Since 2023, its Digital Democracy project has been collecting every public utterance of California elected officials—every floor speech, comment made in committee and social media post, along with their voting records, legislation, and campaign contributions—and making all that information available in a free online platform.

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting.

This is not AI replacing human journalists; it is a civic watchdog organization using technology to feed evidence-based insights to human reporters. And it’s no coincidence that this innovation arose from a new kind of media institution—a non-profit news agency. As the watchdog function of the fourth estate continues to be degraded by the decline of newspapers’ business models, this kind of technological support is a valuable contribution to help a reduced number of human journalists retain something of the scope of action and impact our democracy relies on them for.

These are just four of many stories from around the globe of AI helping to make democracy stronger. The common thread is that the technology is distributing rather than concentrating power. In all four cases, it is being used to assist people performing their democratic tasks—politics in Japan, litigation in Brazil, voting in Germany and watchdog journalism in California—rather than replacing them.

In none of these cases is the AI doing something that humans can’t perfectly competently do. But in all of these cases, we don’t have enough available humans to do the jobs on their own. A sufficiently trustworthy AI can fill in gaps: amplify the power of civil servants and citizens, improve efficiency, and facilitate engagement between government and the public.

One of the barriers towards realizing this vision more broadly is the AI market itself. The core technologies are largely being created and marketed by US tech giants. We don’t know the details of their development: on what material they were trained, what guardrails are designed to shape their behavior, what biases and values are encoded into their systems. And, even worse, we don’t get a say in the choices associated with those details or how they should change over time. In many cases, it’s an unacceptable risk to use these for-profit, proprietary AI systems in democratic contexts.

To address that, we have long advocated for the development of “public AI”: models and AI systems that are developed under democratic control and deployed for public benefit, not sold by corporations to benefit their shareholders. The movement for this is growing worldwide.

Switzerland has recently released the world’s most powerful and fully realized public AI model. It’s called Apertus, and it was developed jointly by the Swiss government and the university ETH Zurich. The government has made it entirely open source—open data, open code, open weights—and free for anyone to use. No illegally acquired copyrighted works were used in its training. It doesn’t exploit poorly paid human laborers from the global south. Its performance is about where the large corporate giants were a year ago, which is more than good enough for many applications. And it demonstrates that it’s not necessary to spend trillions of dollars creating these models. Apertus takes a huge step toward realizing the vision of an alternative to big tech-controlled corporate AI.

AI technology is not without its costs and risks, and we are not here to minimize them. But the technology has significant benefits as well.

AI is inherently power-enhancing, and it can magnify what the humans behind it want to do. It can enhance authoritarianism as easily as it can enhance democracy. It’s up to us to steer the technology in that better direction. If more citizen watchdogs and litigators use AI to amplify their power to oversee government and hold it accountable, if more political parties and election administrators use it to engage meaningfully with and inform voters and if more governments provide democratic alternatives to big tech’s AI offerings, society will be better off.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

The International Association of Cryptologic Research—the academic cryptography association that’s been putting on conferences like Crypto (back when “crypto” meant “cryptography”) and Eurocrypt since the 1980s—had to nullify an online election when trustee Moti Yung lost his decryption key.

For this election and in accordance with the bylaws of the IACR, the three members of the IACR 2025 Election Committee acted as independent trustees, each holding a portion of the cryptographic key material required to jointly decrypt the results. This aspect of Helios’ design ensures that no two trustees could collude to determine the outcome of an election or the contents of individual votes on their own: all trustees must provide their decryption shares.

Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share. As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.

The group will redo the election, but this time with a 2-of-3 threshold scheme for decrypting the results instead of requiring all three trustees.
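Helios uses threshold ElGamal decryption, but the property being added is the classic one from Shamir secret sharing: in a 2-of-3 scheme, any two trustees can reconstruct, so a single lost key no longer blocks the tally. Here is a minimal sketch of that idea (an illustration, not Helios’s actual protocol):

```python
# Minimal sketch of a 2-of-3 threshold scheme (Shamir secret sharing over a
# prime field). Helios itself uses threshold ElGamal decryption, but the
# recovery property is the same: any 2 of the 3 trustees suffice, so one
# lost key no longer blocks the result.
import secrets

P = 2**127 - 1  # a prime large enough for a demo secret

def make_shares(secret, n=3, k=2):
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789)
assert reconstruct(shares[:2]) == 123456789   # any two shares suffice
assert reconstruct(shares[1:]) == 123456789   # including after losing share 1
```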

News articles.

It’s been a month since Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship was published. From what we know, sales are good.

Some of the book’s forty-three chapters are available online: chapters 2, 12, 28, 34, 38, and 41.

We need more reviews—six on Amazon is not enough, and no one has yet posted a viral TikTok review. One review was published in Nature and another on the RSA Conference website, but more would be better. If you’ve read the book, please leave a review somewhere.

My coauthor and I have been doing all sorts of book events, both online and in person. This book event, with Danielle Allen at the Harvard Kennedy School Ash Center, is particularly good. We also have been doing a ton of podcasts, both separately and together. They’re all on the book’s homepage.

There are two live book events in December. If you’re in Boston, come see us at the MIT Museum on 12/1. If you’re in Toronto, you can see me at the Munk School at the University of Toronto on 12/2.

I’m also doing a live AMA on the book on the RSA Conference website on 12/16. Register here.

From Anthropic:

In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves.

The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.

[…]

The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago:

  1. Intelligence. Models’ general levels of capability have increased to the point that they can follow complex instructions and understand context in ways that make very sophisticated tasks possible. Not only that, but several of their well-developed specific skills—in particular, software coding—lend themselves to being used in cyberattacks.
  2. Agency. Models can act as agents—that is, they can run in loops where they take autonomous actions, chain together tasks, and make decisions with only minimal, occasional human input.
  3. Tools. Models have access to a wide array of software tools (often via the open standard Model Context Protocol). They can now search the web, retrieve data, and perform many other actions that were previously the sole domain of human operators. In the case of cyberattacks, the tools might include password crackers, network scanners, and other security-related software.

Google has filed a complaint in court that details the scam:

In a complaint filed Wednesday, the tech giant accused “a cybercriminal group in China” of selling “phishing for dummies” kits. The kits help unsavvy fraudsters easily “execute a large-scale phishing campaign,” tricking hordes of unsuspecting people into “disclosing sensitive information like passwords, credit card numbers, or banking information, often by impersonating well-known brands, government agencies, or even people the victim knows.”

These branded “Lighthouse” kits offer two versions of software, depending on whether bad actors want to launch SMS and e-commerce scams. “Members may subscribe to weekly, monthly, seasonal, annual, or permanent licenses,” Google alleged. Kits include “hundreds of templates for fake websites, domain set-up tools for those fake websites, and other features designed to dupe victims into believing they are entering sensitive information on a legitimate website.”

Google’s filing said the scams often begin with a text claiming that a toll fee is overdue or a small fee must be paid to redeliver a package. Other times they appear as ads—sometimes even Google ads, until Google detected and suspended accounts—luring victims by mimicking popular brands. Anyone who clicks will be redirected to a website to input sensitive information; the sites often claim to accept payments from trusted wallets like Google Pay.

Kendra Albert gave an excellent talk at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities—exactly the opposite of what the responsible disclosure movement of the early 2000s was supposed to prevent. This is the talk.

Thirty years ago, a debate raged over whether vulnerability disclosure was good for computer security. On one side, full disclosure advocates argued that software bugs weren’t getting fixed and wouldn’t get fixed if companies that made insecure software weren’t called out publicly. On the other side, companies argued that full disclosure led to exploitation of unpatched vulnerabilities, especially if they were hard to fix. After blog posts, public debates, and countless mailing list flame wars, there emerged a compromise solution: coordinated vulnerability disclosure, where vulnerabilities were disclosed after a period of confidentiality where vendors can attempt to fix things. Although full disclosure fell out of fashion, disclosure won and security through obscurity lost. We’ve lived happily ever after since.

Or have we? The move towards paid bug bounties and the rise of platforms that manage bug bounty programs for security teams has changed the reality of disclosure significantly. In certain cases, these programs require agreement to contractual restrictions. Under the status quo, that means that software companies sometimes funnel vulnerabilities into bug bounty management platforms and then condition submission on confidentiality agreements that can prohibit researchers from ever sharing their findings.

In this talk, I’ll explain how confidentiality requirements for managed bug bounty programs restrict the ability of those who attempt to report vulnerabilities to share their findings publicly, compromising the bargain at the center of the CVD process. I’ll discuss what contract law can tell us about how and when these restrictions are enforceable, and more importantly, when they aren’t, providing advice to hackers around how to understand their legal rights when submitting. Finally, I’ll call upon platforms and companies to adapt their practices to be more in line with the original bargain of coordinated vulnerability disclosure, including by banning agreements that require non-disclosure.

And this is me from 2007, talking about “responsible disclosure”:

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

Social media has been a familiar, even mundane, part of life for nearly two decades. It can be easy to forget it was not always that way.

In 2008, social media was just emerging into the mainstream. Facebook reached 100 million users that summer. And a singular candidate was integrating social media into his political campaign: Barack Obama. His campaign’s use of social media was so bracingly innovative, so impactful, that it was viewed by journalist David Talbot and others as the strategy that enabled the first term Senator to win the White House.

Over the past few years, a new technology has become mainstream: AI. But still, no candidate has unlocked AI’s potential to revolutionize political campaigns. Americans have three more years to wait before casting their ballots in another Presidential election, but we can look at the 2026 midterms and examples from around the globe for signs of how that breakthrough might occur.

How Obama Did It

Rereading the contemporaneous reflections of the New York Times’ late media critic, David Carr, on Obama’s campaign reminds us of just how new social media felt in 2008. Carr positions it within a now-familiar lineage of revolutionary communications technologies from newspapers to radio to television to the internet.

The Obama campaign and administration demonstrated that social media was different from those earlier communications technologies, including the pre-social internet. Yes, increasing numbers of voters were getting their news from the internet, and content about the then-Senator sometimes made a splash by going viral. But those were still broadcast communications: one voice reaching many. Obama found ways to connect voters to each other.

In describing what social media revolutionized in campaigning, Carr quotes campaign vendor Blue State Digital’s Thomas Gensemer: “People will continue to expect a conversation, a two-way relationship that is a give and take.”

The Obama team made some earnest efforts to realize this vision. His transition team launched change.gov, the website where the campaign collected a “Citizen’s Briefing Book” of public comment. Later, his administration built We the People, an online petitioning platform.

But the lasting legacy of Obama’s 2008 campaign, as political scientists Hahrie Han and Elizabeth McKenna chronicled, was pioneering online “relational organizing.” This technique enlisted individuals as organizers to activate their friends in a self-perpetuating web of relationships.

Perhaps because of the Obama campaign’s close association with the method, relational organizing has been touted repeatedly as the linchpin of Democratic campaigns: in 2020, 2024, and today. But research by non-partisan groups like Turnout Nation and right-aligned groups like the Center for Campaign Innovation has also empirically validated the effectiveness of the technique for inspiring voter turnout within connected groups.

The Facebook of 2008 worked well for relational organizing. It gave users tools to connect and promote ideas to the people they know: college classmates, neighbors, friends from work or church. But the nature of social networking has changed since then.

For the past decade, according to Pew Research, Facebook use has stalled and lagged behind YouTube, while Reddit and TikTok have surged. These platforms are less useful for relational organizing, at least in the traditional sense. YouTube is organized more like broadcast television, where content creators produce content disseminated on their own channels in a largely one-way communication to their fans. Reddit gathers users worldwide in forums (subreddits) organized primarily on topical interest. The endless feed of TikTok’s “For You” page disseminates engaging content with little ideological or social commonality. None of these platforms shares the essential feature of Facebook c. 2008: an organizational structure that emphasizes direct connection to people that users have direct social influence over.

AI and Relational Organizing

Ideas and messages might spread virally through modern social channels, but they are not where you convince your friends to show up at a campaign rally. Today’s platforms are spaces for political hobbyism, where you express your political feelings and see others express theirs.

Relational organizing works when one person’s action inspires others to do the same. That’s inherently a chain of human-to-human connection. If my AI assistant inspires your AI assistant, no human notices and no one’s vote changes. But key steps in the human chain can be assisted by AI. Tell your phone’s AI assistant to craft a personal message to one friend—or a hundred—and it can do it.

So if a campaign hits you at the right time with the right message, they might persuade you to task your AI assistant to ask your friends to donate or volunteer. The result can be something more than a form letter; it could be automatically drafted based on the entirety of your email or text correspondence with that friend. It could include references to your discussions of recent events, or past campaigns, or shared personal experiences. It could sound as authentic as if you’d written it from the heart, but scaled to everyone in your address book.

Research suggests that AI can generate and perform written political messaging about as well as humans. AI will surely play a tactical role in the 2026 midterm campaigns, and some candidates may even use it for relational organizing in this way.

(Artificial) Identity Politics

For AI to be truly transformative of politics, it must change the way campaigns work. And we are starting to see that in the US.

The earliest uses of AI in American political campaigns are, to be polite, uninspiring. Candidates viewed them as just another tool to optimize an endless stream of email and text message appeals, to ramp up political vitriol, to harvest data on voters and donors, or merely as a stunt.

Of course, we have seen the rampant production and spread of AI-powered deepfakes and misinformation. This is already impacting the key 2026 Senate races, which are likely to attract hundreds of millions of dollars in financing. Roy Cooper, Democratic candidate for US Senate from North Carolina, and Abdul El-Sayed, Democratic candidate for Senate from Michigan, were both targeted by viral deepfake attacks in recent months. This may reflect a growing trend in Donald Trump’s Republican party of using AI-generated imagery to build up GOP candidates and assail the opposition.

And yet, in the global elections of 2024, AI was used more memetically than deceptively. So far, conservative and far right parties seem to have adopted this most aggressively. The ongoing rise of Germany’s far-right populist AfD party has been credited to its use of AI to generate nostalgic and evocative (and, to many, offensive) campaign images, videos, and music and, seemingly as a result, they have dominated TikTok. Because most social platforms’ algorithms are tuned to reward media that generates an emotional response, this counts as a double use of AI: to generate content and to manipulate its distribution.

AI can also be used to generate politically useful, though artificial, identities. These identities can fulfill different roles than humans in campaigning and governance because they have differentiated traits. They can’t be imprisoned for speaking out against the state, can be positioned (legitimately or not) as unsusceptible to bribery, and can be forced to show up when humans will not.

In Venezuela, journalists have turned to AI avatars—artificial newsreaders—to report anonymously on issues that would otherwise elicit government retaliation. Albania recently “appointed” an AI to a ministerial post responsible for procurement, claiming that it would be less vulnerable to bribery than a human. In Virginia, both in 2024 and again this year, candidates have used AI avatars as artificial stand-ins for opponents that refused to debate them.

And yet, none of these examples, whether positive or negative, pursue the promise of the Obama campaign: to make voter engagement a “two-way conversation” on a massive scale.

The closest so far to fulfilling that vision anywhere in the world may be Japan’s new political party, Team Mirai. It started in 2024, when an independent Tokyo gubernatorial candidate, Anno Takahiro, used an AI avatar on YouTube to respond to 8,600 constituent questions over a seventeen-day continuous livestream. He collated hundreds of comments on his campaign manifesto into a revised policy platform. While he didn’t win his race, he shot up to a fifth place finish among a record 56 candidates.

Anno was recently elected to the upper house of the federal legislature as the founder of a new party with a 100-day plan to bring his vision of a “public listening AI” to the whole country. In the early stages of that plan, they’ve invested their share of Japan’s 32 billion yen in party grants—public subsidies for political parties—to hire engineers building digital civic infrastructure for Japan. They’ve already created platforms to provide transparency for party expenditures and to use AI to make legislation in the Diet easy to understand, and are meeting with engineers from US-based Jigsaw Labs (a Google company) to learn from international examples of how AI can be used to power participatory democracy.

Team Mirai has yet to prove that it can get a second member elected to the Japanese Diet, let alone to win substantial power, but they’re innovating and demonstrating new ways of using AI to give people a way to participate in politics that we believe is likely to spread.

Organizing with AI

AI could be used in the US in similar ways. Following American federalism’s longstanding model of “laboratories of democracy,” we expect the most aggressive campaign innovation to happen at the state and local level.

D.C. Mayor Muriel Bowser is partnering with MIT and Stanford labs to use the AI-based tool deliberation.io to capture wide scale public feedback in city policymaking about AI. Her administration said that using AI in this process allows “the District to better solicit public input to ensure a broad range of perspectives, identify common ground, and cultivate solutions that align with the public interest.”

It remains to be seen how central this will become to Bowser’s expected re-election campaign in 2026, but the technology has legitimate potential to be a prominent part of a broader program to rebuild trust in government. This is a trail blazed by Taiwan a decade ago. The vTaiwan initiative showed how digital tools like Pol.is, which uses machine learning to make sense of real time constituent feedback, can scale participation in democratic processes and radically improve trust in government. Similar AI listening processes have been used in Kentucky, France, and Germany.

Even if campaigns like Bowser’s don’t adopt this kind of AI-facilitated listening and dialog, expect it to be an increasingly prominent part of American public debate. Through a partnership with Jigsaw, Scott Rasmussen’s Napolitan Institute will use AI to elicit and synthesize the views of at least five Americans from every Congressional district in a project called “We the People.” Timed to coincide with the country’s 250th anniversary in 2026, expect the results to be promoted during the heat of the midterm campaign and to stoke interest in this kind of AI-assisted political sensemaking.

In the year when we celebrate the American republic’s semiquincentennial and continue a decade-long debate about whether or not Donald Trump and the Republican party remade in his image are fighting for the interests of the working class, representation will be on the ballot in 2026. Midterm election candidates will look for any way they can get an edge. For all the risks it poses to democracy, AI presents a real opportunity, too, for politicians to engage voters en masse while factoring their input into their platform and message. Technology isn’t going to turn an uninspiring candidate into Barack Obama, but it gives any aspirant to office the capability to try to realize the promise that swept him into office.

This essay was written with Nathan E. Sanders, and originally appeared in The Fulcrum.

Short-finned pilot whales (Globicephala macrorhynchus) eat a lot of squid:

To figure out a short-finned pilot whale’s caloric intake, Gough says, the team had to combine data from a variety of sources, including movement data from short-lasting tags, daily feeding rates from satellite tags, body measurements collected via aerial drones, and sifting through the stomachs of unfortunate whales that ended up stranded on land.

Once the team pulled all this data together, they estimated that a typical whale will eat between 82 and 202 squid a day. To meet their energy needs, a whale will have to consume an average of 140 squid a day. Annually, that’s about 74,000 squid per whale. For all the whales in the area, that amounts to about 88,000 tons of squid eaten every year.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

This is a current list of where and when I am scheduled to speak:

  • My coauthor Nathan E. Sanders and I are speaking at the Rayburn House Office Building in Washington, DC at noon ET on November 17, 2025. The event is hosted by the POPVOX Foundation and the topic is “AI and Congress: Practical Steps to Govern and Prepare.”
  • I’m speaking on “Integrity and Trustworthy AI” at North Hennepin Community College in Brooklyn Park, Minnesota, USA, on Friday, November 21, 2025, at 2:00 PM CT. The event is cohosted by the college and The Twin Cities IEEE Computer Society.
  • Nathan E. Sanders and I will be speaking at the MIT Museum in Cambridge, Massachusetts, USA, on December 1, 2025, at 6:00 pm ET.
  • Nathan E. Sanders and I will be speaking at a virtual event hosted by City Lights on the Zoom platform, on December 3, 2025, at 6:00 PM PT.
  • I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, on February 5, 2026. Details to come.

The list is maintained on this page.

As AI capabilities grow, we must delineate the roles that should remain exclusively human. The line seems to be between fact-based decisions and judgment-based decisions.

For example, in a medical context, if an AI was demonstrably better at reading a test result and diagnosing cancer than a human, you would take the AI in a second. You want the more accurate tool. But justice is harder because justice is inherently a human quality in a way that “Is this tumor cancerous?” is not. That’s a fact-based question. “What’s the right thing to do here?” is a human-based question.

Chess provides a useful analogy for this evolution. For most of history, humans were best. Then, in the 1990s, Deep Blue beat the best human. For a while after that, a good human paired with a good computer could beat either one alone. But a few years ago, that changed again, and now the best computer simply wins. There will be an intermediate period for many applications where the human-AI combination is optimal, but eventually, for fact-based tasks, the best AI will likely surpass both.

The enduring role for humans lies in making judgments, especially when values come into conflict. What is the proper immigration policy? There is no single “right” answer; it’s a matter of feelings, values, and what we as a society hold dear. A lot of societal governance is about resolving conflicts between people’s rights—my right to play my music versus your right to have quiet. There’s no factual answer there. We can imagine machines will help; perhaps once we humans figure out the rules, the machines can do the implementing and kick the hard cases back to us. But the fundamental value judgments will likely remain our domain.

This essay originally appeared in IVY.

The Business of Secrets: Adventures in Selling Encryption Around the World by Fred Kinch (May 24, 2004)

From the vantage point of today, it’s surreal reading about the commercial cryptography business in the 1970s. Nobody knew anything. The manufacturers didn’t know whether the cryptography they sold was any good. The customers didn’t know whether the crypto they bought was any good. Everyone pretended to know, thought they knew, or knew better than to even try to know.

The Business of Secrets is the self-published memoirs of Fred Kinch. He was founder and vice president—mostly of sales—at a US cryptographic hardware company called Datotek, from the company’s founding in 1969 until 1982. It’s mostly a disjointed collection of stories about the difficulties of selling to governments worldwide, along with descriptions of the highs and (mostly) lows of foreign airlines, foreign hotels, and foreign travel in general. But it’s also about encryption.

Datotek sold cryptographic equipment in the era after rotor machines and before modern academic cryptography. The company initially marketed computer-file encryption, but pivoted to link encryption – low-speed data, voice, fax – because that’s what the market wanted.

These were the years when the NSA hired anyone promising in the field, and routinely classified – and thereby blocked – publication of academic mathematics papers of those they didn’t hire. They controlled the fielding of strong cryptography by aggressively using the International Traffic in Arms Regulations. Kinch talks about the difficulties in getting an export license for Datotek’s products; he didn’t know that the only reason he ever got that license was because the NSA was able to break his company’s stuff. He had no idea that his largest competitor, the Swiss company Crypto AG, was owned and controlled by the CIA and its West German equivalent. “Wouldn’t that have made our life easier if we had known that back in the 1970s?” Yes, it would. But no one knew.

Glimmers of the clandestine world peek out of the book. Countries like France ask detailed tech questions, borrow or buy a couple of units for “evaluation,” and then disappear again. Did they break the encryption? Did they just want to see what their adversaries were using? No one at Datotek knew.

Kinch “carried the key generator logic diagrams and schematics” with him – even today, it’s good practice not to rely on their secrecy for security – but the details seem laughably insecure: four linear shift registers of 29, 23, 13, and 7 bits, variable stepping, and a small nonlinear final transformation. The NSA probably used this as a challenge to its new hires. But Datotek didn’t know that, at the time.

Kinch writes: “The strength of the cryptography had to be accepted on trust and only on trust.” Yes, but it’s so, so weird to read about it in practice. Kinch demonstrated the security of his telephone encryptors by hooking a pair of them up and having people listen to the encrypted voice. It’s rather like demonstrating the safety of a food additive by showing that someone doesn’t immediately fall over dead after eating it. (In one absolutely bizarre anecdote, an Argentine sergeant with a “hearing defect” could understand the scrambled analog voice. Datotek fixed its security, but only offered the upgrade to the Argentines, because no one else complained. As I said, no one knew anything.)

In his postscript, he writes that even if the NSA could break Datotek’s products, they were “vastly superior to what [his customers] had used previously.” Given that the previous devices were electromechanical rotor machines, and that his primary competition was a CIA-run operation, he’s probably right. But even today, we know nothing about any other country’s cryptanalytic capabilities during those decades.

A lot of this book has a “you had to be there” vibe. And it’s mostly tone-deaf. There is no real acknowledgment of the human-rights-abusing countries on Datotek’s customer list, or of how its products might have assisted those governments. But it’s a fascinating artifact of an era before commercial cryptography went mainstream, before academic cryptography became approved for US classified data, before those of us outside the triple fences of the NSA understood the mathematics of cryptography.

This book review originally appeared in AFIO.

Former DoJ attorney John Carlin writes about hackback, which he defines thus: “A hack back is a type of cyber response that incorporates a counterattack designed to proactively engage with, disable, or collect evidence about an attacker. Although hack backs can take on various forms, they are—by definition—not passive defensive measures.”

His conclusion:

As the law currently stands, specific forms of purely defensive measures are authorized so long as they affect only the victim’s system or data.

At the other end of the spectrum, offensive measures that involve accessing or otherwise causing damage or loss to the hacker’s systems are likely prohibited, absent government oversight or authorization. And even then parties should proceed with caution in light of the heightened risks of misattribution, collateral damage, and retaliation.

As for the broad range of other hack back tactics that fall in the middle of active defense and offensive measures, private parties should continue to engage in these tactics only with government oversight or authorization. These measures exist within a legal gray area and would likely benefit from amendments to the CFAA and CISA that clarify and carve out the parameters of authorization for specific self-defense measures. But in the absence of amendments or clarification on the scope of those laws, private actors can seek governmental authorization through an array of channels, whether they be partnering with law enforcement or seeking authorization to engage in more offensive tactics from the courts in connection with private litigation.

This is why AIs are not ready to be personal assistants:

A new attack called ‘CometJacking’ exploits URL parameters to pass to Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

[…]

CometJacking is a prompt-injection attack where the query string processed by the Comet AI browser contains malicious instructions added using the ‘collection’ parameter of the URL.

LayerX researchers say that the prompt tells the agent to consult its memory and connected services instead of searching the web. As the AI tool is connected to various services, an attacker leveraging the CometJacking method could exfiltrate available data.

In their tests, the connected services and accessible data include Google Calendar invites and Gmail messages, and the malicious prompt included instructions to encode the sensitive data in base64 and then exfiltrate them to an external endpoint.

According to the researchers, Comet followed the instructions and delivered the information to an external system controlled by the attacker, evading Perplexity’s checks.

I wrote previously:

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

Encryption can protect data at rest and data in transit, but does nothing for data in use. What we have are secure enclaves. I’ve written about this before:

Almost all cloud services have to perform some computation on our data. Even the simplest storage provider has code to copy bytes from an internal storage system and deliver them to the user. End-to-end encryption is sufficient in such a narrow context. But often we want our cloud providers to be able to perform computation on our raw data: search, analysis, AI model training or fine-tuning, and more. Without expensive, esoteric techniques, such as secure multiparty computation protocols or homomorphic encryption techniques that can perform calculations on encrypted data, cloud servers require access to the unencrypted data to do anything useful.

Fortunately, the last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.
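Conceptually, verifying such an attestation is simple: a key rooted in the processor vendor signs a report that binds a measurement of the loaded code to a fresh nonce, and the customer checks both before trusting any results. The sketch below simulates the enclave side with a locally generated key; real quote formats and vendor certificate chains are considerably more involved.

```python
# Conceptual sketch of TEE remote attestation: the enclave (simulated here)
# signs a report binding a code measurement to a fresh nonce, and the client
# checks the signature and the expected measurement before trusting results.
# Real attestation formats and vendor-rooted certificate chains differ.
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

enclave_code = b"def tally(votes): ..."           # code the customer wants run
expected_measurement = hashlib.sha256(enclave_code).hexdigest()

# --- simulated enclave side ---
attestation_key = Ed25519PrivateKey.generate()    # stands in for a vendor-rooted key
nonce = os.urandom(16)                            # freshness challenge from the client
report = f"{expected_measurement}:{nonce.hex()}".encode()
signature = attestation_key.sign(report)

# --- client side ---
public_key = attestation_key.public_key()
public_key.verify(signature, report)              # raises InvalidSignature if forged
measurement, echoed_nonce = report.decode().split(":")
assert measurement == expected_measurement        # the enclave runs the intended code
assert echoed_nonce == nonce.hex()                # reply is fresh, not replayed
print("attestation verified")
```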

Secure enclaves are critical in our modern cloud-based computing architectures. And, of course, they have vulnerabilities:

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and TDX/SDX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing them to work against the latest TEEs.

Yes, these attacks require physical access. But that’s exactly the threat model secure enclaves are supposed to secure against.

Over the past few decades, it’s become easier and easier to create fake receipts. Decades ago, it required special paper and printers—I remember a company in the UK advertising its services to people trying to cover up their affairs. Then, receipts became computerized, and faking them required some artistic skills to make the page look realistic.

Now, AI can do it all:

Several receipts shown to the FT by expense management platforms demonstrated the realistic nature of the images, which included wrinkles in paper, detailed itemization that matched real-life menus, and signatures.

[…]

The rise in these more realistic copies has led companies to turn to AI to help detect fake receipts, as most are too convincing to be found by human reviewers.

The software works by scanning receipts to check the metadata of the image to discover whether an AI platform created it. However, this can be easily removed by users taking a photo or a screenshot of the picture.

To combat this, it also considers other contextual information by examining details such as repetition in server names and times and broader information about the employee’s trip.
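A crude version of the metadata check described above is easy to write. The marker list and file path below are invented for illustration, and, as the article notes, a screenshot or re-photograph strips this signal entirely, which is why the contextual checks matter.

```python
# Crude sketch of the metadata check described above: look for a generator
# marker in an image's EXIF data. The marker list and file path are
# illustrative; a screenshot or re-photograph removes this signal.
from PIL import Image
from PIL.ExifTags import TAGS

SUSPECT_MARKERS = ("dall-e", "midjourney", "stable diffusion", "gpt")  # illustrative

def flag_receipt(path):
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))
        if name in ("Software", "ImageDescription", "Artist"):
            if any(marker in str(value).lower() for marker in SUSPECT_MARKERS):
                return f"suspicious {name} tag: {value}"
    return "no generator metadata found (not proof of authenticity)"

print(flag_receipt("expense_receipt.jpg"))  # hypothetical file path
```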

Yet another AI-powered security arms race.

The Department of Justice has indicted thirty-one people over the high-tech rigging of high-stakes poker games.

In a typical legitimate poker game, a dealer uses a shuffling machine to shuffle the cards randomly before dealing them to all the players in a particular order. As set forth in the indictment, the rigged games used altered shuffling machines that contained hidden technology allowing the machines to read all the cards in the deck. Because the cards were always dealt in a particular order to the players at the table, the machines could determine which player would have the winning hand. This information was transmitted to an off-site member of the conspiracy, who then transmitted that information via cellphone back to a member of the conspiracy who was playing at the table, referred to as the “Quarterback” or “Driver.” The Quarterback then secretly signaled this information (usually by prearranged signals like touching certain chips or other items on the table) to other co-conspirators playing at the table, who were also participants in the scheme. Collectively, the Quarterback and other players in on the scheme (i.e., the cheating team) used this information to win poker games against unwitting victims, who sometimes lost tens or hundreds of thousands of dollars at a time. The defendants used other cheating technology as well, such as a chip tray analyzer (essentially, a poker chip tray that also secretly read all cards using hidden cameras), an x-ray table that could read cards face down on the table, and special contact lenses or eyeglasses that could read pre-marked cards.
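The core of the cheat is trivially simple once the shuffler leaks the deck order: combined with the fixed dealing procedure, it reveals every player’s hole cards and the board before a single card is turned over. A minimal illustration with a stand-in deck order:

```python
# Minimal illustration of why a shuffler that reports deck order defeats the
# game: given that order and the standard dealing procedure, every player's
# hole cards and the community cards are known in advance.
RANKS = "23456789TJQKA"
SUITS = "cdhs"
deck = [r + s for s in SUITS for r in RANKS]   # stand-in for the order a rigged machine reads

def deal_holdem(deck_order, num_players):
    # Hold'em dealing: one card per player per round, two rounds, then
    # burn-flop, burn-turn, burn-river.
    hole = {seat: [] for seat in range(num_players)}
    idx = 0
    for _ in range(2):
        for seat in range(num_players):
            hole[seat].append(deck_order[idx]); idx += 1
    board = []
    idx += 1; board += deck_order[idx:idx + 3]; idx += 3   # flop (after burn)
    idx += 1; board.append(deck_order[idx]); idx += 1      # turn
    idx += 1; board.append(deck_order[idx]); idx += 1      # river
    return hole, board

hole, board = deal_holdem(deck, num_players=6)
print(hole[0], board)   # the "Quarterback" already knows all of this
```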

News articles.

For many in the research community, it’s gotten harder to be optimistic about the impacts of artificial intelligence.

As authoritarianism is rising around the world, AI-generated “slop” is overwhelming legitimate media, while AI-generated deepfakes are spreading misinformation and parroting extremist messages. AI is making warfare more precise and deadly amidst intransigent conflicts. AI companies are exploiting people in the global South who work as data labelers, and profiting from content creators worldwide by using their work without license or compensation. The industry is also affecting an already-roiling climate with its enormous energy demands.

Meanwhile, particularly in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.

This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.

The Academy’s View of AI

A Pew study in April found that 56 percent of AI experts (authors and presenters of AI-related conference papers) predict that AI will have positive effects on society. But that optimism doesn’t extend to the scientific community at large. A 2023 survey of 232 scientists by the Center for Science, Technology and Environmental Policy Studies at Arizona State University found more concern than excitement about the use of generative AI in daily life—by nearly a three to one ratio.

We have encountered this sentiment repeatedly. Our careers of diverse applied work have brought us in contact with many research communities: privacy, cybersecurity, physical sciences, drug discovery, public health, public interest technology, and democratic innovation. In all of these fields, we’ve found strong negative sentiment about the impacts of AI. The feeling is so palpable that we’ve often been asked to represent the voice of the AI optimist, even though we spend most of our time writing about the need to reform the structures of AI development.

We understand why these audiences see AI as a destructive force, but this negativity engenders a different concern: that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.

Elements of a Positive Vision for AI

Many have argued that turning the tide of climate action requires clearly articulating a path towards positive outcomes. In the same way, while scientists and technologists should anticipate, warn against, and help mitigate the potential harms of AI, they should also highlight the ways the technology can be harnessed for good, galvanizing public action towards those ends.

There are myriad ways to leverage and reshape AI to improve people's lives, distribute rather than concentrate power, and even strengthen democratic processes. Many examples have arisen from the scientific community and deserve to be celebrated.

Some examples: AI is eliminating communication barriers across languages, including under-resourced contexts like marginalized sign languages and indigenous African languages. It is helping policymakers incorporate the viewpoints of many constituents through AI-assisted deliberations and legislative engagement. Large language models can scale individual dialogs to address climate change skepticism, spreading accurate information at a critical moment. National labs are building AI foundation models to accelerate scientific research. And throughout the fields of medicine and biology, machine learning is solving scientific problems like the prediction of protein structure in aid of drug discovery, which was recognized with a Nobel Prize in 2024.

While each of these applications is nascent and surely imperfect, they all demonstrate that AI can be wielded to advance the public interest. Scientists should embrace, champion, and expand on such efforts.

A Call to Action for Scientists

In our new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, we describe four key actions for policymakers committed to steering AI toward the public good.

These apply to scientists as well. First, researchers should work to reform the AI industry to be more ethical, equitable, and trustworthy. We must collectively develop ethical norms for research that advances and applies AI, and we should use and draw attention to AI developers who adhere to those norms.

Second, we should resist harmful uses of AI by documenting the negative applications of AI and casting a light on inappropriate uses.

Third, we should responsibly use AI to make society and people's lives better, exploiting its capabilities to help the communities we serve.

And finally, we must advocate for the renovation of institutions to prepare them for the impacts of AI; universities, professional societies, and democratic organizations are all vulnerable to disruption.

Scientists have a special privilege and responsibility: We are close to the technology itself and therefore well positioned to influence its trajectory. We must work to create an AI-infused world that we want to live in. Technology, as the historian Melvin Kranzberg observed, “is neither good nor bad; nor is it neutral.” Whether the AI we build is detrimental or beneficial to society depends on the choices we make today. But we cannot create a positive future without a vision of what it looks like.

This essay was written with Nathan E. Sanders, and originally appeared in IEEE Spectrum.

Microsoft is warning of a scam involving online payroll systems. Criminals use social engineering to steal people’s credentials, and then divert direct deposits into accounts that they control. Sometimes they do other things to make it harder for the victim to realize what is happening.

I feel like this kind of thing is happening everywhere, with everything. As we move more of our personal and professional lives online, we enable criminals to subvert the very systems we rely on.

These days, the most important meeting attendee isn’t a person: It’s the AI notetaker.

This system assigns action items and determines the importance of what is said. If it becomes necessary to revisit the facts of the meeting, its summary is treated as impartial evidence.

But clever meeting attendees can manipulate this system's record by speaking more to what the underlying AI weights for summarization and importance than to their colleagues. As a result, you can expect some attendees to use language more likely to be captured in summaries, time their interventions strategically, repeat key points, and employ formulaic phrasing that AI models are more likely to pick up on. Welcome to the world of AI summarization optimization (AISO).

Optimizing for algorithmic manipulation

AI summarization optimization has a well-known precursor: SEO.

Search-engine optimization is as old as the World Wide Web. The idea is straightforward: Search engines scour the internet digesting every possible page, with the goal of serving the best results to every possible query. The objective for a content creator, company, or cause is to optimize for the algorithm search engines have developed to determine their webpage rankings for those queries. That requires writing for two audiences at once: human readers and the search-engine crawlers indexing content. Techniques to do this effectively are passed around like trade secrets, and a $75 billion industry offers SEO services to organizations of all sizes.

More recently, researchers have documented techniques for influencing AI responses, including large-language model optimization (LLMO) and generative engine optimization (GEO). Tricks include content optimization—adding citations and statistics—and adversarial approaches: using specially crafted text sequences. These techniques often target sources that LLMs heavily reference, such as Reddit, which is claimed to be cited in 40% of AI-generated responses. The effectiveness and real-world applicability of these methods remain limited and largely experimental, although there is substantial evidence that countries such as Russia are actively pursuing this.

AI summarization optimization follows the same logic on a smaller scale. Human participants in a meeting may want a certain fact highlighted in the record, or their perspective to be reflected as the authoritative one. Rather than persuading colleagues directly, they adapt their speech for the notetaker that will later define the “official” summary. For example:

  • “The main factor in last quarter’s delay was supply chain disruption.”
  • “The key outcome was overwhelmingly positive client feedback.”
  • “Our takeaway here is in alignment moving forward.”
  • “What matters here is the efficiency gains, not the temporary cost overrun.”

The techniques are subtle. They employ high-signal phrases such as “key takeaway” and “action item,” keep statements short and clear, and repeat them when possible. They also use contrastive framing (“this, not that”), and speak early in the meeting or at transition points.

Once spoken words are transcribed, they enter the model’s input. Cue phrases—and even transcription errors—can steer what makes it into the summary. In many tools, the output format itself is also a signal: Summarizers often offer sections such as “Key Takeaways” or “Action Items,” so language that mirrors those headings is more likely to be included. In effect, well-chosen phrases function as implicit markers that guide the AI toward inclusion.
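To make the mechanism concrete, here is a minimal sketch of a naive extractive summarizer in Python. The cue-phrase list, weights, and position bonus are hypothetical illustrations of the signals described above, not any product's actual scoring logic; the point is only that short, formulaic, well-placed phrasing rises to the top.

# A toy extractive summarizer illustrating how cue phrases can act as
# implicit salience signals. All phrases and weights are hypothetical.
CUE_PHRASES = ["key takeaway", "action item", "the main factor",
               "the key outcome", "what matters here"]

def score_sentence(sentence: str, position: int, total: int) -> float:
    s = sentence.lower()
    score = 0.0
    # Phrases that mirror summary headings get a large boost.
    score += sum(2.0 for phrase in CUE_PHRASES if phrase in s)
    # Statements near the start or end of the transcript are favored,
    # mirroring the position bias noted above.
    if position < 0.1 * total or position > 0.9 * total:
        score += 1.0
    # Short, declarative sentences are easier to lift verbatim.
    if len(s.split()) <= 20:
        score += 0.5
    return score

def summarize(transcript: list[str], k: int = 3) -> list[str]:
    ranked = sorted(enumerate(transcript),
                    key=lambda t: score_sentence(t[1], t[0], len(transcript)),
                    reverse=True)
    return [sentence for _, sentence in ranked[:k]]

A speaker who opens the meeting with "the key takeaway is..." scores highly under every one of these heuristics, which is exactly the behavior AISO exploits.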

Research confirms this. Early AI summarization research showed that models trained to reconstruct summary-style sentences systematically overweight such content. Models over-rely on early-position content in news. And models often overweight statements at the start or end of a transcript, underweighting the middle. Recent work further confirms vulnerability to phrasing-based manipulation: models cannot reliably distinguish embedded instructions from ordinary content, especially when phrasing mimics salient cues.

How to combat AISO

If AISO becomes common, three forms of defense will emerge. First, meeting participants will exert social pressure on one another. When researchers secretly deployed AI bots in Reddit’s r/changemyview community, users and moderators responded with strong backlash calling it “psychological manipulation.” Anyone using obvious AI-gaming phrases may face similar disapproval.

Second, organizations will start governing meeting behavior using AI: risk assessments and access restrictions before the meetings even start, detection of AISO techniques in meetings, and validation and auditing after the meetings.

Third, AI summarizers will have their own technical countermeasures. For example, the AI security company CloudSEK recommends content sanitization to strip suspicious inputs, prompt filtering to detect meta-instructions and excessive repetition, context window balancing to weight repeated content less heavily, and user warnings showing content provenance.

Broader defenses could draw from security and AI safety research: preprocessing content to detect dangerous patterns, consensus approaches requiring consistency thresholds, self-reflection techniques to detect manipulative content, and human oversight protocols for critical decisions. Meeting-specific systems could implement additional defenses: tagging inputs by provenance, weighting content by speaker role or centrality with sentence-level importance scoring, and discounting high-signal phrases while favoring consensus over fervor.
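As a rough illustration of how a few of these defenses fit together, the sketch below discounts summary-bait phrasing, penalizes verbatim repetition, and weights speakers by role. The phrase list, role table, and numeric weights are all hypothetical; a real system would tune them against labeled transcripts rather than hard-code them.

# A toy defensive scorer: discount high-signal phrases, penalize repetition,
# and weight by speaker role. All values here are illustrative assumptions.
from collections import Counter

HIGH_SIGNAL = ["key takeaway", "action item", "the main factor",
               "what matters here"]
SPEAKER_WEIGHTS = {"facilitator": 1.2, "participant": 1.0, "guest": 0.8}

def defended_score(sentence: str, speaker_role: str,
                   repeat_counts: Counter) -> float:
    s = sentence.lower().strip()
    score = SPEAKER_WEIGHTS.get(speaker_role, 1.0)
    # Discount sentences that lean on summary-bait phrasing.
    score -= 0.5 * sum(1 for phrase in HIGH_SIGNAL if phrase in s)
    # Discount near-verbatim repetition: saying it three times
    # should not triple its weight.
    score -= 0.3 * max(0, repeat_counts[s] - 1)
    return score

def rank_for_summary(turns: list[tuple[str, str]], k: int = 3) -> list[str]:
    # turns: list of (speaker_role, sentence) pairs from the transcript.
    repeats = Counter(sentence.lower().strip() for _, sentence in turns)
    ranked = sorted(turns,
                    key=lambda t: defended_score(t[1], t[0], repeats),
                    reverse=True)
    return [sentence for _, sentence in ranked[:k]]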

Reshaping human behavior

AI summarization optimization is a small, subtle shift, but it illustrates how the adoption of AI is reshaping human behavior in unexpected ways. The potential implications are quietly profound.

Meetings—humanity's most fundamental collaborative ritual—are being silently reengineered by those who understand the algorithm's preferences. The articulate are gaining an invisible advantage over the wise. Adversarial thinking is becoming routine, embedded in the most ordinary workplace rituals. As AI becomes part of organizational life, strategic interaction with AI notetakers and summarizers may soon be a necessary executive skill for navigating corporate culture.

AI summarization optimization illustrates how quickly humans adapt communication strategies to new technologies. As AI becomes more embedded in workplace communication, recognizing these emerging patterns may prove increasingly important.

This essay was written with Gadi Evron, and originally appeared in CSO.
