May 2025

There’s a new cybersecurity awareness campaign: Take9. The idea is that people—you, me, everyone—should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.

There’s a website—of course—and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.

First, the advice is not realistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; use a timer. Then think about how many links you click on and how many things you forward or reply to. Are we pausing for nine seconds after every text message? Every Slack ping? Does the clock reset if someone replies midpause? What about browsing—do we pause before clicking each link, or after every page loads? The logistics quickly become impossible. I doubt they tested the idea on actual users.
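
A quick back-of-the-envelope calculation makes the point. The daily counts here are assumptions for a typical office worker, not measurements; substitute your own and the conclusion barely changes:

    # Back-of-the-envelope cost of a nine-second pause. The daily action
    # counts below are assumptions, not measurements.
    PAUSE_SECONDS = 9

    actions_per_day = {
        "links clicked": 80,
        "emails opened or answered": 60,
        "chat messages and texts": 120,
    }

    total = sum(actions_per_day.values())
    minutes = total * PAUSE_SECONDS / 60
    print(f"{total} actions -> {minutes:.0f} minutes of pausing per day")
    # 260 actions -> 39 minutes of pausing per day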

Second, it largely won’t help. The industry should know because we tried it a decade ago. “Stop. Think. Connect.” was an awareness campaign from 2016, by the Department of Homeland Security—this was before CISA—and the National Cybersecurity Alliance. The message was basically the same: Stop and think before doing anything online. It didn’t work then, either.

Take9’s website says, “Science says: In stressful situations, wait 10 seconds before responding.” The problem is that clicking on a link is not a stressful situation. It’s a routine one, something that happens hundreds of times a day. Maybe you can train a person to count to 10 before punching someone in a bar, but not before opening an attachment.

And there is no basis in science for it. It’s a folk belief, all over the Internet but with no actual research behind it—like the five-second rule when you drop food on the floor. In emotionally charged contexts, most people are already overwhelmed, cognitively taxed, and not functioning in a space where rational interruption works as neatly as this advice suggests.

Pausing Adds Little

Pauses help us break habits. If we are clicking, sharing, linking, downloading, and connecting out of habit, a pause to break that habit works. But the problem here isn’t habit alone. The problem is that people aren’t able to differentiate between something legitimate and an attack.

The Take9 website says that nine seconds is “time enough to make a better decision,” but there’s no use telling people to stop and think if they don’t know what to think about after they’ve stopped. Pause for nine seconds and… do what? Take9 offers no guidance. It presumes people have the cognitive tools to understand the myriad potential attacks and figure out which one of the thousands of Internet actions they take is harmful. If people don’t have the right knowledge, pausing for longer—even a minute—will do nothing to add knowledge.

The three-part suspicion, cognition, and automaticity model (SCAM) is one way to think about this. The first pathway is a lack of knowledge—not knowing what’s risky and what isn’t. The second is habit: people doing what they always do. And the third is flawed mental shortcuts, like believing PDFs are safer than Microsoft Word documents, or that mobile devices are safer than computers for opening suspicious emails.

These pathways don’t always occur in isolation; sometimes they happen together or sequentially. They can influence each other or cancel each other out. For example, a lack of knowledge can lead someone to rely on flawed mental shortcuts, while those same shortcuts can reinforce that lack of knowledge. That’s why meaningful behavioral change requires more than just a pause; it needs cognitive scaffolding and system designs that account for these dynamic interactions.

A successful awareness campaign would do more than tell people to pause. It would guide them through a two-step process. First, trigger suspicion, motivating them to look more closely. Then, direct their attention by telling them what to look at and how to evaluate it. When both happen, the person is far more likely to make a better decision.

This means that pauses need to be context specific. Think about email readers that embed warnings like “EXTERNAL: This email is from an address outside your organization” or “You have not received an email from this person before.” Those are specifics, and useful. We could imagine an AI plug-in that warns: “This isn’t how Bruce normally writes.” But of course, there’s an arms race in play; the bad guys will use these systems to figure out how to bypass them.
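
To see how simple the logic behind those banners is, here's a minimal sketch. The function name and inputs are hypothetical; real mail systems do this inside the mail server or client, but the decision rules look roughly like this:

    # Hypothetical sketch of context-specific email warnings. Real systems
    # hook logic like this into the mail server or client.

    def warning_banners(sender, org_domain, known_senders):
        banners = []
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain != org_domain:
            banners.append("EXTERNAL: This email is from an address "
                           "outside your organization.")
        if sender.lower() not in known_senders:
            banners.append("You have not received an email "
                           "from this person before.")
        return banners

    # A first-time sender from outside the organization triggers both.
    print(warning_banners("alice@example.net", "example.com",
                          {"bob@example.com"}))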

This is all hard. The old cues aren’t there anymore. Current phishing attacks have evolved from those older Nigerian scams filled with grammar mistakes and typos. Text message, voice, or video scams are even harder to detect. There isn’t enough context in a text message for the system to flag. In voice or video, it’s much harder to trigger suspicion without disrupting the ongoing conversation. And all the false positives, when the system flags a legitimate conversation as a potential scam, work against people’s own intuition. People will just start ignoring their own suspicions, just as most people ignore all sorts of warnings that their computer puts in their way.

Even if we do this all well and correctly, we can’t make people immune to social engineering. Recently, both cyberspace activist Cory Doctorow and security researcher Troy Hunt—two people who you’d expect to be excellent scam detectors—got phished. In both cases, it was just the right message at just the right time.

It’s even worse if you’re a large organization. Security isn’t based on the average employee’s ability to detect a malicious email; it’s based on the worst person’s inability—the weakest link. Even if awareness raises the average, it won’t help enough.
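
A back-of-the-envelope calculation shows why. Assume, unrealistically but illustratively, that employees fall for a given phishing email independently, each with some small probability:

    # Chance that at least one of n employees falls for a given phish,
    # assuming (illustratively) independent employees.

    def p_at_least_one(p_per_employee, n):
        return 1 - (1 - p_per_employee) ** n

    for p in (0.05, 0.01, 0.001):
        print(f"p={p}: {p_at_least_one(p, 1000):.3%}")
    # p=0.05 -> ~100%; p=0.01 -> 99.996%; p=0.001 -> 63.2%

Even if training cuts the per-employee rate by an order of magnitude, the attacker still only needs one success.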

Don’t Place Blame Where It Doesn’t Belong

Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a pause and making a better decision. What’s not said, but certainly implied, is that if they don’t take that pause and don’t make those better decisions, then they’re to blame when the attack occurs.

That’s simply not true, and its blame-the-user message is one of the worst mistakes our industry makes. Stop trying to fix the user. It’s not the user’s fault if they click on a link and it infects their system. It’s not their fault if they plug in a strange USB drive or ignore a warning message that they can’t understand. It’s not even their fault if they get fooled by a look-alike bank website and lose their money. The problem is that we’ve designed these systems to be so insecure that regular, nontechnical people can’t use them with confidence. We’re using security awareness campaigns to cover up bad system design. Or, as security researcher Angela Sasse first said in 1999: “Users are not the enemy.”

We wouldn’t accept that in other parts of our lives. Imagine Take9 in other contexts. Food service: “Before sitting down at a restaurant, take nine seconds: Look in the kitchen, maybe check the temperature of the cooler, or if the cooks’ hands are clean.” Aviation: “Before boarding a plane, take nine seconds: Look at the engine and cockpit, glance at the plane’s maintenance log, ask the pilots if they feel rested.” This is obviously ridiculous advice. The average person doesn’t have the training or expertise to evaluate restaurant or aircraft safety—and we don’t expect them to. We have laws and regulations in place that allow people to eat at a restaurant or board a plane without worry.

But—we get it—the government isn’t going to step in and regulate the Internet. These insecure systems are what we have. Security awareness training, and the blame-the-user mentality that comes with it, are all we have. So if we want meaningful behavioral change, it needs a lot more than just a pause. It needs cognitive scaffolding and system designs that account for all the dynamic interactions that go into a decision to click, download, or share. And that takes real work—more work than just an ad campaign and a slick video.

This essay was written with Arun Vishwanath, and originally appeared in Dark Reading.

Russia is proposing a rule that all foreigners in Moscow install a tracking app on their phones.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

This isn’t the first time we’ve seen this. Qatar did it in 2022 around the World Cup:

“After accepting the terms of these apps, moderators will have complete control of users’ devices,” he continued. “All personal content, the ability to edit it, share it, extract it as well as data from other apps on your device is in their hands. Moderators will even have the power to unlock users’ devices remotely.”

One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.

A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.

It would be hard for U.S. users to avoid the Chinese VPNs. The ownership of many appeared deliberately opaque, with several concealing their structure behind layers of offshore shell companies. TTP was able to determine the Chinese ownership of the 20 VPN apps being offered to Apple’s U.S. users by piecing together corporate documents from around the world. None of those apps clearly disclosed their Chinese ownership.

Interesting story:

USS Stein was underway when her anti-submarine sonar gear suddenly stopped working. On returning to port and putting the ship in a drydock, engineers observed many deep scratches in the sonar dome’s rubber “NOFOUL” coating. In some areas, the coating was described as being shredded, with rips up to four feet long. Large claws were left embedded at the bottom of most of the scratches.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Technology and innovation have transformed every part of society, including our electoral experiences. Campaigns are spending and doing more than at any other time in history. Ever-growing war chests fuel billions of voter contacts every cycle. Campaigns now have better ways of scaling outreach methods and offer volunteers and donors more efficient ways to contribute time and money. Campaign staff have adapted to vast changes in media and social media landscapes, and use data analytics to forecast voter turnout and behavior.

Yet despite these unprecedented investments in mobilizing voters, overall trust in electoral health, democratic institutions, voter satisfaction, and electoral engagement has significantly declined. What might we be missing?

In software development, the concept of user experience (UX) is fundamental to the design of any product or service. It’s a way to think holistically about how a user interacts with technology. It ensures that products and services are built with the users’ actual needs, behaviors, and expectations in mind, as opposed to what developers think users want. UX enables informed decisions based on how the user will interact with the system, leading to improved design, more effective solutions, and increased user satisfaction. Good UX design results in easy, relevant, useful, positive experiences. Bad UX design leads to unhappy users.

This is not how we normally think of elections. Campaigns measure success through short-term outputs—voter contacts, fundraising totals, issue polls, ad impressions—and, ultimately, election results. Rarely do they evaluate how individuals experience this as a singular, messy, democratic process. Each campaign, PAC, nonprofit, and volunteer group may be focused on their own goal, but the voter experiences it all at once. By the time they’re in line to vote, they’ve been hit with a flood of outreach—spammy texts from unfamiliar candidates, organizers with no local ties, clunky voter registration sites, conflicting information, and confusing messages, even from campaigns they support. Political teams can point to data that justifies this barrage, but the effectiveness of voter contact has been steadily declining since 2008. Intuitively, we know this approach has long-term costs. To address this, let’s evaluate the UX of an election cycle from the point of view of the end user, the everyday citizen.

Specifically, how might we define the UX of an election cycle: the voter experience (VX)? A VX lens could help us see the full impact of the electoral cycle from the perspective that matters most: the voters’.

For example, what if we thought about elections in terms of questions like these?

  • How do voters experience an election cycle, from start to finish?
  • How do voters perceive their interactions with political campaigns?
  • What aspects of the election cycle do voters enjoy? What do they dislike? Do citizens currently feel fulfilled by voting?
  • If voters “tune out” of politics, what part of the process has made them want to not pay attention?
  • What experiences decrease the number of eligible citizens who register and vote?
  • Are we able to measure the cumulative impacts of political content interactions over the course of multiple election cycles?
  • Can polls or focus groups help researchers learn about longitudinal sentiment from citizens as they experience multiple election cycles?
  • If so, what would we want to learn in order to bolster democratic participation and trust in institutions?

Thinking in terms of VX can help answer these questions. Moreover, researching and designing around VX could help identify additional metrics, beyond traditional turnout and engagement numbers, that better reflect the collective impact of campaigning: of all those voter contact and persuasion efforts combined.

This isn’t a radically new idea, and earlier efforts to embed UX design into electoral work yielded promising early benefits. In 2020, a coalition of political tech builders created a Volunteer Experience program. The group held design sprints for political tech tools, such as canvassing apps and phone banking sites. Their goal was to apply UX principles to improve the volunteer user flow, enhance data hygiene, and improve volunteer retention. If a few sprints can improve the phone banking experience, imagine the transformative possibilities of taking this lens to the VX as a whole.

If we want democracy to thrive long-term, we need to think beyond short-term wins and table stakes. This isn’t about replacing grassroots organizing or civic action with digital tools. Rather, it’s about learning from UX research methodology to build lasting, meaningful engagement that involves both technology and community organizing. Often, it is indeed local, on-the-ground organizers who have been sounding the alarm about the long-term effects of prioritizing short-term tactics. A VX approach may provide additional data to bolster their arguments.

Learnings from a VX analysis of election cycles could also guide the design of new programs that not only mobilize voters (to contribute, to campaign for their candidates, and to vote), but also ensure that the entire process of voting, post-election follow-up, and broader civic participation is as accessible, intuitive, and fulfilling as possible. Better voter UX will lead to more politically engaged citizens and higher voter turnout.

VX methodology may help combine real-time citizen feedback with centralized decision-making. Moving beyond election cycles, focusing on the citizen UX could accelerate possibilities for citizens to provide real-time feedback, review the performance of elected officials and government, and receive help-desk-style support with the same level of ease as other everyday “products.” By understanding how people engage with civic life over time, we can better design systems for citizens that strengthen participation, trust, and accountability at every level.

Our hope is that this approach, and the new data and metrics uncovered by it, will support shifts that help restore civic participation and strengthen trust in institutions. With citizens oriented as the central users of our democratic systems, we can build new best practices for fulfilling civic infrastructure that foster a more effective and inclusive democracy.

The time for this is now. Despite hard-fought victories and lessons learned from failures, many people working in politics privately acknowledge a hard truth: our current approach isn’t working. Every two years, people build campaigns, mobilize voters, and drive engagement, but they are held back by what they don’t understand about the long-term impact of their efforts. VX thinking can help solve that.

This essay was written with Hillary Lehr, and originally appeared on the Harvard Kennedy School Ash Center’s website.

I already knew about the declining response rate for polls and surveys. The percentage of survey responses that come from AI bots is also increasing.

Solutions are hard:

1. Make surveys less boring.
We need to move past bland, grid-filled surveys and start designing experiences people actually want to complete. That means mobile-first layouts, shorter runtimes, and maybe even a dash of storytelling. TikTok- or dating-app-style surveys wouldn’t be a bad idea. Or is that just me being too Gen Z?

2. Bot detection.
There’s a growing toolkit of ways to spot AI-generated responses—using things like response entropy, writing-style patterns, or even metadata like keystroke timing (see the sketch after this list). Platforms should start integrating these detection tools more widely. Ideally, you introduce an element that only humans can do; for example, you have to pick up your prize somewhere in person. Note, though, that these bots can easily be designed to find ways around the most common detection tactics, such as CAPTCHAs, timed responses, and postcode and IP recognition. Believe me, way less code than you suspect is needed to do this.

3. Pay people more.
If you’re only offering 50 cents for 10 minutes of mental effort, don’t be surprised when your respondent pool consists of AI agents and sleep-deprived gig workers. Smarter, dynamic incentives—especially for underrepresented groups—can make a big difference. Perhaps pay-differentiation (based on simple demand/supply) makes sense?

4. Rethink the whole model.
Surveys aren’t the only way to understand people. We can also learn from digital traces, behavioral data, or administrative records. Think of it as moving from a single snapshot to a fuller, blended picture. Yes, it’s messier—but it’s also more real.
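
On point 2: here's a rough sketch of two of the detection signals mentioned above, response entropy and keystroke timing. The thresholds are made up for illustration; real systems combine many such signals:

    import math
    from statistics import mean, pstdev

    # Two toy bot-detection heuristics. Thresholds are illustrative.

    def char_entropy(text):
        """Shannon entropy of the character distribution, in bits.
        Very low entropy suggests templated or filler text."""
        counts = [text.count(c) for c in set(text)]
        total = len(text)
        return sum(-(n / total) * math.log2(n / total) for n in counts)

    def looks_scripted(intervals_ms):
        """Humans type with jitter; near-uniform keystroke timing
        (a tiny coefficient of variation) suggests scripted input."""
        return pstdev(intervals_ms) / mean(intervals_ms) < 0.05

    print(char_entropy("aaaaaaaaaa"))              # 0.0 bits: filler
    print(char_entropy("a varied, human answer"))  # several bits
    print(looks_scripted([100, 101, 99, 100, 100]))  # True: machine-regular
    print(looks_scripted([80, 210, 95, 330, 120]))   # False: human jitter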

A DoorDash driver stole over $2.5 million over several months:

The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office.

Interesting flaw in the software design. He probably would have gotten away with it if he’d kept the numbers small. It’s only when the amount missing is too big to ignore that the investigations start.
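
The missing guard is simple to sketch. This is hypothetical code, not DoorDash's: the point is that order states should move one way only, and payout should be a one-shot event per order:

    # Hypothetical sketch of the missing guard: one-way state transitions
    # and at most one payout per order. Not DoorDash's actual code.

    ALLOWED = {"created": {"in_process"},
               "in_process": {"complete"},
               "complete": set()}  # no way back out of "complete"

    class Order:
        def __init__(self):
            self.state = "created"
            self.paid_out = False

        def transition(self, new_state):
            if new_state not in ALLOWED[self.state]:
                raise ValueError(f"illegal transition "
                                 f"{self.state} -> {new_state}")
            self.state = new_state

        def pay_driver(self):
            if self.state != "complete" or self.paid_out:
                raise ValueError("payout refused")
            self.paid_out = True  # one payout, ever

    order = Order()
    order.transition("in_process")
    order.transition("complete")
    order.pay_driver()
    try:
        order.transition("in_process")  # the $2.5 million loop, now closed
    except ValueError as e:
        print(e)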

In response to a FOIA request, the NSA released “Fifty Years of Mathematical Cryptanalysis (1937-1987),” by Glenn F. Stahly, with a lot of redactions.

Weirdly, this is the second time the NSA has declassified the document. John Young got a copy in 2019. This one has fewer redactions, and nothing that was visible in the 2019 release is redacted here.

If you find anything interesting in the document, please tell us about it in the comments.

From Hackaday.com, this is a neural network simulation of a pet squid.

Autonomous Behavior:

  • The squid moves autonomously, making decisions based on his current state (hunger, sleepiness, etc.).
  • Implements a vision cone for food detection, simulating realistic foraging behavior.
  • Neural network can make decisions and form associations.
  • Weights are analyzed, tweaked, and trained by a Hebbian learning algorithm.
  • Experiences from short-term and long-term memory can influence decision-making.
  • Squid can create new neurons in response to his environment (neurogenesis).
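
The Hebbian learning mentioned in that list is the classic "neurons that fire together, wire together" rule: each weight changes in proportion to the product of pre- and post-synaptic activity. A generic sketch, not the project's actual code:

    import random

    # Generic Hebbian update: delta_w[i][j] = ETA * pre[i] * post[j].

    ETA = 0.1  # learning rate

    def hebbian_step(weights, pre, post):
        return [[w + ETA * x * y for w, y in zip(row, post)]
                for row, x in zip(weights, pre)]

    w = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(20):
        pre = [1.0, random.random() * 0.1]  # input unit 0 fires every step
        post = [1.0, 0.0]                   # output unit 0 fires every step
        w = hebbian_step(w, pre, post)
    print(w)  # w[0][0] grows to 2.0; uncorrelated weights stay near zero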

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

This is a weird story:

U.S. energy officials are reassessing the risk posed by Chinese-made devices that play a critical role in renewable energy infrastructure after unexplained communication equipment was found inside some of them, two people familiar with the matter said.

[…]

Over the past nine months, undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, one of them said.

Reuters was unable to determine how many solar power inverters and batteries they have looked at.

The rogue components provide additional, undocumented communication channels that could allow firewalls to be circumvented remotely, with potentially catastrophic consequences, the two people said.

The article is short on facts and long on innuendo. Both more details and credible named sources would help a lot here.

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of legislation by up to 70%.” AI would create a “comprehensive legislative plan” spanning local and federal law and would be connected to public administration, the courts, and global policy trends.

The plan was widely greeted with astonishment. This sort of AI legislating would be a global “first,” with the potential to go “horribly wrong.” Skeptics fear that the AI model will make up facts or fundamentally fail to understand societal tenets such as fair treatment and justice when influencing law.

The truth is, the UAE’s idea of AI-generated law is not really a first and not necessarily terrible.

The first instance of enacted law known to have been written by AI was passed in Porto Alegre, Brazil, in 2023. It was a local ordinance about water meter replacement. Council member Ramiro Rosário was simply looking for help in generating and articulating ideas for solving a policy problem, and ChatGPT did well enough that the bill passed unanimously. We approve of AI assisting humans in this manner, although Rosário should have disclosed that the bill was written by AI before it was voted on.

Brazil was a harbinger but hardly unique. In recent years, there has been a steady stream of attention-seeking politicians at the local and national level introducing bills that they promote as being drafted by AI or letting AI write their speeches for them or even vocalize them in the chamber.

The Emirati proposal is different from those examples in important ways. It promises to be more systemic and less of a one-off stunt. The UAE has promised to spend more than $3 billion to transform into an “AI-native” government by 2027. Time will tell if it is also different in being more hype than reality.

Rather than being a true first, the UAE’s announcement is emblematic of a much wider global trend of legislative bodies integrating AI assistive tools for legislative research, drafting, translation, data processing, and much more. Individual lawmakers have begun turning to AI drafting tools as they traditionally have relied on staffers, interns, or lobbyists. The French government has gone so far as to train its own AI model to assist with legislative tasks.

Even asking AI to comprehensively review and update legislation would not be a first. In 2020, the U.S. state of Ohio began using AI to do wholesale revision of its administrative law. AI’s speed is potentially a good match to this kind of large-scale editorial project; the state’s then-lieutenant governor, Jon Husted, claims it was successful in eliminating 2.2 million words’ worth of unnecessary regulation from Ohio’s code. Now a U.S. senator, Husted has recently proposed to take the same approach to U.S. federal law, with an ideological bent promoting AI as a tool for systematic deregulation.

The dangers of confabulation and inhumanity—while legitimate—aren’t really what makes the potential of AI-generated law novel. Humans make mistakes when writing law, too. Recall that a single typo in a 900-page law nearly brought down the massive U.S. health care reforms of the Affordable Care Act in 2015, before the Supreme Court excused the error. And, distressingly, the citizens and residents of nondemocratic states are already subject to arbitrary and often inhumane laws. (The UAE is a federation of monarchies without direct elections of legislators and with a poor record on political rights and civil liberties, as evaluated by Freedom House.)

The primary concern with using AI in lawmaking is that it will be wielded as a tool by the powerful to advance their own interests. AI may not fundamentally change lawmaking, but its superhuman capabilities have the potential to exacerbate the risks of power concentration.

AI, and technology generally, is often invoked by politicians to give their project a patina of objectivity and rationality, but it doesn’t really do any such thing. As proposed, AI would simply give the UAE’s hereditary rulers new tools to express, enact, and enforce their preferred policies.

Mohammed’s emphasis that a primary benefit of AI will be to make law faster is also misguided. The machine may write the text, but humans will still propose, debate, and vote on the legislation. Drafting is rarely the bottleneck in passing new law. What takes much longer is for humans to amend, horse-trade, and ultimately come to agreement on the content of that legislation—even when that politicking is happening among a small group of monarchic elites.

Rather than expeditiousness, the more important capability offered by AI is sophistication. AI has the potential to make law more complex, tailoring it to a multitude of different scenarios. The combination of AI’s research and drafting speed makes it possible for it to outline legislation governing dozens, even thousands, of special cases for each proposed rule.

But here again, this capability of AI opens the door for the powerful to have their way. AI’s capacity to write complex law would allow the humans directing it to dictate their exacting policy preference for every special case. It could even embed those preferences surreptitiously.

Since time immemorial, legislators have carved out legal loopholes to narrowly cater to special interests. AI will be a powerful tool for authoritarians, lobbyists, and other empowered interests to do this at a greater scale. AI can help automatically produce what political scientist Amy McKay has termed “microlegislation”: loopholes that may be imperceptible to human readers on the page—until their impact is realized in the real world.

But AI can be constrained and directed to distribute power rather than concentrate it. For Emirati residents, the most intriguing possibility of the AI plan is the promise to introduce AI “interactive platforms” where the public can provide input to legislation. In experiments across locales as diverse as Kentucky, Massachusetts, France, Scotland, Taiwan, and many others, civil society within democracies is innovating and experimenting with ways to leverage AI to help listen to constituents and construct public policy in a way that best serves diverse stakeholders.

If the UAE is going to build an AI-native government, it should do so for the purpose of empowering people and not machines. AI has real potential to improve deliberation and pluralism in policymaking, and Emirati residents should hold their government accountable to delivering on this promise.

The case is over:

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

I’m sure it’ll be appealed. Everything always is.

A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”

“Research rockets.” Sure.

Reporting on the rise of fake students enrolling in community college courses:

The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.

The article talks about the rise of this type of fraud, the difficulty of detecting it, and how it upends quite a bit of the class structure and learning community.

Slashdot thread.

Deepfakes are now mimicking heartbeats

In a nutshell

  • Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats.
  • The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology.
  • To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy.
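
For context, the detection methods being undermined rely on remote photoplethysmography: tiny, periodic skin-color changes as blood pulses through the face. A simplified sketch of that signal extraction, assuming NumPy and a stack of cropped face frames (the array shapes and frequency band are illustrative):

    import numpy as np

    # Simplified rPPG: average the green channel per frame, then find the
    # dominant frequency in the heart-rate band (0.7-4 Hz = 42-240 bpm).
    # Assumes frames is a (num_frames, height, width, 3) RGB array at fps.

    def estimate_bpm(frames, fps):
        green = frames[:, :, :, 1].mean(axis=(1, 2))  # one sample per frame
        green = green - green.mean()                  # drop the DC component
        spectrum = np.abs(np.fft.rfft(green))
        freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        return float(freqs[band][np.argmax(spectrum[band])] * 60)

    # Synthetic check: a 72 bpm (1.2 Hz) flicker buried in noise is found.
    t = np.arange(300) / 30.0
    pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(300)
    frames = 100 + pulse[:, None, None, None] * np.ones((300, 8, 8, 3))
    print(estimate_bpm(frames, 30.0))  # ~72.0

The research finding is that high-quality deepfakes now pass exactly this kind of test, because the source video's pulse leaks through into the fake.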

And the AI models will start mimicking that.

Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.

In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)

I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access—and therefore trust—that rivals what we currently give our email provider, social network, or smartphone.

This agentic wallet is an example of an AI assistant. It’ll combine personal information about you, transactional data that you are a party to, and general information about the world, and it will use all of that to answer questions, make predictions, and ultimately act on your behalf. We have early demos of this running right now. Making it work will require an extraordinary amount of trust in the system, which means the system requires integrity. That’s why we’re building protections in from the beginning.

Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.

I like Visa’s approach because it’s an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. Here’s our announcement about its announcement:

This isn’t a new relationship—we’ve been working together for over two years. We’ve conducted a successful POC and now we’re standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of Visa’s suite of Intelligent Commerce APIs.

For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.

I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. “Wallet” is a good metaphor for personal data stores. I’m hoping this is another step towards adoption.

The UK’s National Cyber Security Centre just released its white paper on “Advanced Cryptography,” which it defines as “cryptographic techniques for processing encrypted data, providing enhanced functionality over and above that provided by traditional cryptography.” It includes things like homomorphic encryption, attribute-based encryption, zero-knowledge proofs, and secure multiparty computation.
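
To make "processing encrypted data" concrete, here is a toy demonstration of the additively homomorphic property of Paillier encryption, with deliberately tiny, insecure parameters. It's illustration only; rolling your own crypto like this is exactly what the paper warns against:

    import math, random

    # Toy Paillier: anyone can add two plaintexts by multiplying their
    # ciphertexts, without the secret key. Tiny insecure primes; never
    # use code like this for real.

    p, q = 293, 433                  # real deployments use ~1024-bit primes
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:   # r must be invertible mod n
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    a, b = encrypt(20), encrypt(25)
    print(decrypt((a * b) % n2))     # 45: the sum, computed while encrypted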

The paper is full of good advice. I especially appreciate this warning:

When deciding whether to use Advanced Cryptography, start with a clear articulation of the problem, and use that to guide the development of an appropriate solution. That is, you should not start with an Advanced Cryptography technique, and then attempt to fit the functionality it provides to the problem.

And:

In almost all cases, it is bad practice for users to design and/or implement their own cryptography; this applies to Advanced Cryptography even more than traditional cryptography because of the complexity of the algorithms. It also applies to writing your own application based on a cryptographic library that implements the Advanced Cryptography primitive operations, because subtle flaws in how they are used can lead to serious security weaknesses.

The conclusion:

Advanced Cryptography covers a range of techniques for protecting sensitive data at rest, in transit and in use. These techniques enable novel applications with different trust relationships between the parties, as compared to traditional cryptographic methods for encryption and authentication.

However, there are a number of factors to consider before deploying a solution based on Advanced Cryptography, including the relative immaturity of the techniques and their implementations, significant computational burdens and slow response times, and the risk of opening up additional cyber attack vectors.

There are initiatives underway to standardise some forms of Advanced Cryptography, and the efficiency of implementations is continually improving. While many data processing problems can be solved with traditional cryptography (which will usually lead to a simpler, lower-cost and more mature solution), for those that cannot, Advanced Cryptography techniques could in the future enable innovative ways of deriving benefit from large shared datasets, without compromising individuals’ privacy.

NCSC blog entry.
