The largest AI security risks aren’t in code, they’re in culture

November 5, 2025



A resilient AI system goes beyond technical performance. It reflects the culture of the team behind it.

And as AI becomes more embedded in businesses, used by employees and the public alike, the systems we rely on are becoming harder to govern.

Darren Lewis


The risks AI introduces aren’t usually dramatic or sudden. They emerge gradually, through unclear ownership, unmanaged updates, lack of training, and fragmented decision-making.



Security, in this context, depends less on the code itself and more on the habits and coordination of the teams building around it.

Reframing AI security

When AI security is discussed, the focus tends to sit squarely on the technical layer: clean datasets, robust algorithms, and well-structured models. It’s an understandable instinct. These are visible, tangible components, and they matter.

But in practice, most risks accumulate not from flaws in logic but from gaps in coordination. They tend to build slowly when updates aren’t logged, when models move between teams without context, or when no one is quite sure who made the last change.

The UK’s Cyber Security and Resilience Bill is a step forward in formalizing how digital infrastructure should be secured. It introduces new requirements for operational assurance, continuous monitoring, and incident response, especially for service providers supporting critical systems.


But while the Bill sharpens expectations around infrastructure, it has yet to capture how AI is actually developed and maintained in practice.

In sectors like healthcare and finance, models are already influencing high-stakes decisions. And they are often built in fast-moving environments where roles shift, tools evolve, and governance does not always keep pace.

Where risk tends to build up

AI development rarely stays within a single team. Models are retrained, reused, and adapted as needs shift. That flexibility is part of their value, but it also adds layers of complexity.



Small changes can have wide-reaching effects. One team might update the training data to reflect new inputs. Another might adjust a threshold to reduce false positives. A third might deploy a model without checking how it was configured before.

None of these decisions are inherently wrong. But when teams can’t trace a decision back to its origin, or no one is sure who approved a change, the ability to respond quickly is lost.
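The traceability the article calls for can be made concrete with even a very small amount of tooling. The sketch below is purely illustrative and is not described in the article: a hypothetical change log where every model update records its author, its approver, and a summary, so that "who made the last change" always has an answer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical, minimal audit trail: each model change records who made it,
# what changed, and who approved it, so a decision can be traced to its origin.
@dataclass
class ModelChange:
    model: str
    author: str
    approver: str
    summary: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ChangeLog:
    def __init__(self) -> None:
        self._entries: list[ModelChange] = []

    def record(self, change: ModelChange) -> None:
        # Refuse unapproved changes up front, rather than discovering
        # them during an incident.
        if not change.approver:
            raise ValueError(f"change to {change.model} has no approver")
        self._entries.append(change)

    def history(self, model: str) -> list[ModelChange]:
        # Every recorded change for one model, oldest first.
        return [c for c in self._entries if c.model == model]

log = ChangeLog()
log.record(ModelChange("fraud-v2", "data-team", "risk-lead",
                       "retrained on new quarterly inputs"))
log.record(ModelChange("fraud-v2", "platform-team", "risk-lead",
                       "raised decision threshold to cut false positives"))
print([c.summary for c in log.history("fraud-v2")])
```

The point of the sketch is not the code itself but the habit it enforces: a change without a named approver is rejected at the moment it happens, not reconstructed after the fact.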

These are not faults of code or architecture but signs that the way teams build, adapt, and hand over systems hasn’t kept pace with how widely those systems are now used. When working culture falls behind, risk becomes harder to see, and therefore harder to contain.

Turning culture into a control surface

If risk accumulates in day-to-day habits, resilience must be built in the same place. Culture is more than an enabler of good practice; it becomes a mechanism for maintaining control as systems scale.

That principle is reflected in regulation. The EU AI Act sets requirements for high-risk systems, including conformity assessments and voluntary codes of practice, but much of the responsibility for embedding governance into everyday routines still rests with the organizations deploying them.

In the UK, the Department for Science, Innovation and Technology’s AI Cyber Security Code of Practice follows a similar approach, pairing high-level principles with practical guidance that helps businesses turn policy into working norms.

Research and recognition programs point in the same direction. Studies of real-world AI development, such as the UK’s LASR initiative, show how communication, handovers, and assumptions between teams shape trust as much as the models themselves.

Initiatives like the National AI Awards then highlight organizations that are putting cultural governance into practice and establishing clearer standards of maturity.

For businesses, the task now is to make cultural clarity a more integrated part of operational design. The more that teams can rely on shared norms, visible ownership, and consistent decision-making, the more resilient their AI systems will become over time.

Looking ahead

As AI becomes part of everyday decision-making, leadership focus must shift from individual model performance to the wider environment those systems operate in.

That means moving beyond project-level fixes and investing in the connective tissue between teams: the routines, forums, and habits that give AI development the structure to scale safely.

Building that maturity takes time, but it starts with clarity. Clarity of ownership, of change, and of context.

The organizations that make progress will be those that treat culture not as a soft skill, but as a working asset, something to be reviewed, resourced, and continuously improved.

This cultural structure is what will ultimately shape security: embedded habits that make risk easier to see, surface, and act on as AI becomes more pivotal to how today’s businesses operate.


This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
