December 23, 2024

‘Orgs need to be ready’: AI risks and rewards for cybersecurity in 2025

“In 2025, we expect to see more AI-driven cyberthreats designed to evade detection, including more advanced evasion techniques bypassing endpoint detection and response (EDR), known as EDR killers, and traditional defences,” Khalid argues. “Attackers may use legitimate applications like PowerShell and remote access tools to deploy ransomware, making detection harder for standard security solutions.” On a more frightening note, Michael Adjei, director of systems engineering at Illumio, believes that AI will offer something of a field day for social engineers, who will trick people into creating breaches themselves: “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025.” ... “With greater adoption of AI will come increased cyberthreats, and security teams need to remain nimble, confident and knowledgeable.” Similarly, Britton argues that teams “will need to undergo a dedicated effort around understanding how [AI] can deliver results”. “To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost?”
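
To make the “legitimate applications” point concrete, here is a minimal Python sketch of the kind of string-matching heuristic a basic control might apply to process command lines. The patterns are invented for illustration, not a real EDR rule set; real products rely on much richer behavioural telemetry, which is exactly why attackers hiding inside tools like PowerShell are hard to catch.

```python
import re

# Illustrative only: a few "living off the land" patterns often associated
# with PowerShell abuse (encoded payloads, download cradles, hidden windows).
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\s",        # base64-encoded payloads
    r"downloadstring\(",            # in-memory download cradles
    r"invoke-expression|\biex\b",   # dynamic execution
    r"-windowstyle\s+hidden",       # hidden console windows
]

def flag_process_event(command_line: str) -> bool:
    """Return True if a process command line matches a known-bad pattern."""
    lowered = command_line.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(flag_process_event("powershell.exe Get-ChildItem"))      # False
print(flag_process_event("powershell.exe -enc SQBFAFgA..."))   # True
```

A benign-looking invocation passes while an encoded one is flagged; anything the pattern list has not seen, an “EDR killer” included, sails straight through.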


Will we ever trust robots?

The chief argument for robots with human characteristics is a functional one: Our homes and workplaces were built by and for humans, so a robot with a humanlike form will navigate them more easily. But Hoffman believes there’s another reason: “Through this kind of humanoid design, we are selling a story about this robot that it is in some way equivalent to us or to the things that we can do.” In other words, build a robot that looks like a human, and people will assume it’s as capable as one. In designing Alfie’s physical appearance, Prosper has borrowed some aspects of typical humanoid design but rejected others. Alfie has wheels instead of legs, for example, as bipedal robots are currently less stable in home environments, but he does have arms and a head. The robot will be built on a vertical column that resembles a torso; his specific height and weight are not yet public. He will have two emergency stop buttons. Nothing about Alfie’s design will attempt to obscure the fact that he is a robot, Lewis says. “The antithesis [of trustworthiness] would be designing a robot that’s intended to emulate a human … and its measure of success is based on how well it has deceived you,” he told me. “Like, ‘Wow, I was talking to that thing for five minutes and I didn’t realize it’s a robot.’ That, to me, is dishonest.”


My Personal Reflection on DevOps in 2024 and Looking Ahead to 2025

As we move into 2025, the big stories that dominated 2024 will continue to evolve. We can expect AI—particularly generative AI—to become even more deeply ingrained in the DevOps toolchain. Prompt engineering for AI models will likely emerge as a specialized skill, just as writing Dockerfiles was a skill that distinguished DevOps engineers a decade ago. Agentic AI will become the norm, with teams of agents taking on the tasks that lower-level workers once performed. On the policy side, escalating regulatory demands will push enterprises to adopt more stringent compliance frameworks, integrating AI-driven compliance-as-code tools into their pipelines. Platform engineering will mature, focusing on standardization and the creation of “golden paths” that offer best practices out of the box. We may also see a consolidation of DevOps tool vendors as the market seeks integrated, end-to-end platforms over patchwork solutions. The focus will be on usability, quality, security and efficiency—attributes that can only be realized through cohesive ecosystems rather than fragmented toolchains. Sustainability will also factor into 2025’s narrative. As environmental concerns shape global economic policies and public sentiment, DevOps teams will take resource optimization more seriously.
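
As a rough illustration of what compliance-as-code means in a pipeline, here is a hedged Python sketch. The resources and policies are invented for the example; real tools such as Open Policy Agent evaluate declarative policies against live infrastructure state rather than a hard-coded list, but the pipeline mechanics are the same: violations fail the build like a failing unit test.

```python
# A minimal compliance-as-code sketch (hypothetical rules, not a real tool):
# load declared infrastructure resources, check each against policy, and
# fail the pipeline stage when anything violates a rule.
import sys

RESOURCES = [
    {"name": "logs-bucket", "encrypted": True,  "public": False},
    {"name": "temp-bucket", "encrypted": False, "public": True},
]

POLICIES = [
    ("storage must be encrypted at rest", lambda r: r["encrypted"]),
    ("storage must not be publicly readable", lambda r: not r["public"]),
]

violations = [
    f"{r['name']}: {desc}"
    for r in RESOURCES
    for desc, check in POLICIES
    if not check(r)
]

if violations:
    print("\n".join(violations))
    sys.exit(1)  # a non-zero exit blocks the deployment stage
print("all compliance checks passed")
```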


From Invisible UX to AI Governance: Kanchan Ray, CTO, Nagarro Shares his Vision for a Connected Future

Vision and data derived from videos have become integral to numerous industries, with machine vision playing a crucial role in automating business processes. For instance, automatic inventory management, often supported by robots, is transitioning from experimental to mainstream. Machine vision also enhances security and safety by replacing human monitoring with machines that operate around the clock, offering greater accuracy at a lower cost. On the consumer front, virtual try-ons and AI-assisted mirrors have become standard features in reputable retail outlets, both in physical stores and on online platforms. ... Traditional boundaries of security, which once focused on standard data security, governance, and IT protocols, are now fluid and dynamic. The integration of AI, data analytics, and machine learning has created diverse contexts for output consumption, resulting in new business operations around model simulations and decision-making related to model pipelines. These operations include processes like model publishing, hyperparameter observability, and auditing model reasoning, all of which push the boundaries of AI responsibility.
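
A small sketch of what “model publishing” with hyperparameter observability and auditable reasoning might look like in practice; the function name and manifest shape below are hypothetical, not any specific product’s API.

```python
# Sketch of a publishing step: each model version is registered with its
# hyperparameters and an audit note, so reviewers can later trace exactly
# what was deployed and why. All names here are illustrative.
import hashlib, json, time

def publish_model(weights: bytes, hyperparams: dict, audit_note: str) -> dict:
    """Build an audit manifest for a model artifact before deployment."""
    manifest = {
        "artifact_sha256": hashlib.sha256(weights).hexdigest(),
        "hyperparameters": hyperparams,   # observable, not buried in code
        "audit_note": audit_note,         # why this model was approved
        "published_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    print(json.dumps(manifest, indent=2))  # in practice: write to a registry
    return manifest

publish_model(
    weights=b"\x00fake-weights",
    hyperparams={"learning_rate": 3e-4, "epochs": 10},
    audit_note="Approved after bias review on holdout segment B.",
)
```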


If your AI-generated code becomes faulty, who faces the most liability exposure?

None of the lawyers, though, discussed who is at fault if the code generated by an AI results in some catastrophic outcome. For example: The company delivering a product shares some responsibility for, say, choosing a library that has known deficiencies. If a product ships using a library that has known exploits and that product causes an incident that results in tangible harm, who owns that failure? The product maker, the library coder, or the company that chose the product? Usually, it's all three. ... Now add AI code into the mix. Clearly, most of the responsibility falls on the shoulders of the coder who chooses to use code generated by an AI. After all, it's common knowledge that the code may not work and needs to be thoroughly tested. In a comprehensive lawsuit, will claimants also go after the companies that produce the AIs and even the organizations from which content was taken to train those AIs (even if done without permission)? As every attorney has told me, there is very little case law thus far. We won't really know the answers until something goes wrong, parties wind up in court, and it's adjudicated thoroughly. We're in uncharted waters here. 


5 Signs You’ve Built a Secretly Bad Architecture (And How to Fix It)

Dependencies are the hidden traps of software architecture. When your system is littered with them — whether they’re external libraries, tightly coupled modules, or interdependent microservices — it creates a tangled web that’s hard to navigate. They make the system difficult to debug locally. Every change risks breaking something else. Deployments take more time, troubleshooting takes longer, and cascading failures are a real threat. The result? Your team spends more time toiling and less time innovating. ... Reducing dependencies doesn’t mean eliminating them entirely or splitting your system into nanoservices. Overcorrecting by creating tiny, hyper-granular services might seem like a solution, but it often leads to even greater complexity. In this scenario, you’ll find yourself managing dozens — or even hundreds — of moving parts, each requiring its own maintenance, monitoring, and communication overhead. Instead, aim for balance. Establish boundaries for your microservices that promote cohesion, avoiding unnecessary fragmentation. Strive for an architecture where services interact efficiently but aren’t overly reliant on each other, which increases the flexibility and resilience of your system.
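
One concrete way to establish such boundaries without fragmenting into nanoservices is to depend on narrow interfaces rather than on other modules directly. The sketch below uses hypothetical names and Python’s typing.Protocol: the order service never imports the billing module, so billing can change or move without a ripple, and a local test double stands in for it, which also answers the “difficult to debug locally” complaint.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The narrow contract OrderService depends on."""
    def charge(self, order_id: str, amount_cents: int) -> bool: ...

class OrderService:
    def __init__(self, payments: PaymentGateway) -> None:
        self.payments = payments  # coupled to an interface, not a module

    def checkout(self, order_id: str, amount_cents: int) -> str:
        ok = self.payments.charge(order_id, amount_cents)
        return "confirmed" if ok else "payment failed"

class FakeGateway:  # a local test double; no remote billing service needed
    def charge(self, order_id: str, amount_cents: int) -> bool:
        return True

print(OrderService(FakeGateway()).checkout("order-42", 1999))  # confirmed
```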
