December 23, 2024
“In 2025, we expect to see more AI-driven cyberthreats designed to evade detection, including more advanced evasion techniques bypassing endpoint detection and response (EDR), known as EDR killers, and traditional defences,” Khalid argues. “Attackers may use legitimate applications like PowerShell and remote access tools to deploy ransomware, making detection harder for standard security solutions.” On a more frightening note, Michael Adjei, director of systems engineering at Illumio, believes that AI will offer something of a field day for social engineers, who will trick people into actually creating breaches themselves: “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025.” ... “With greater adoption of AI will come increased cyberthreats, and security teams need to remain nimble, confident and knowledgeable.” Similarly, Britton argues that teams “will need to undergo a dedicated effort around understanding how [AI] can deliver results”. “To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost?”
The chief argument for robots with human characteristics is a functional one: Our homes and workplaces were built by and for humans, so a robot with a humanlike form will navigate them more easily. But Hoffman believes there’s another reason: “Through this kind of humanoid design, we are selling a story about this robot that it is in some way equivalent to us or to the things that we can do.” In other words, build a robot that looks like a human, and people will assume it’s as capable as one. In designing Alfie’s physical appearance, Prosper has borrowed some aspects of typical humanoid design but rejected others. Alfie has wheels instead of legs, for example, as bipedal robots are currently less stable in home environments, but he does have arms and a head. The robot will be built on a vertical column that resembles a torso; his specific height and weight are not yet public. He will have two emergency stop buttons. Nothing about Alfie’s design will attempt to obscure the fact that he is a robot, Lewis says. “The antithesis [of trustworthiness] would be designing a robot that’s intended to emulate a human … and its measure of success is based on how well it has deceived you,” he told me. “Like, ‘Wow, I was talking to that thing for five minutes and I didn’t realize it’s a robot.’ That, to me, is dishonest.”
As we move into 2025, the big stories that dominated 2024 will continue to evolve. We can expect AI—particularly generative AI—to become even more deeply ingrained in the DevOps toolchain. Prompt engineering for AI models will likely emerge as a specialized skill, just as writing Dockerfiles was a skill that distinguished DevOps engineers a decade ago. Agentic AI will become the norm, with teams of agents taking on the tasks that lower-level workers once performed. On the policy side, escalating regulatory demands will push enterprises to adopt more stringent compliance frameworks, integrating AI-driven compliance-as-code tools into their pipelines. Platform engineering will mature, focusing on standardization and the creation of “golden paths” that offer best practices out of the box. We may also see a consolidation of DevOps tool vendors as the market seeks integrated, end-to-end platforms over patchwork solutions. The focus will be on usability, quality, security and efficiency—attributes that can only be realized through cohesive ecosystems rather than fragmented toolchains. Sustainability will also factor into 2025’s narrative. As environmental concerns shape global economic policies and public sentiment, DevOps teams will take resource optimization more seriously.
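To make the compliance-as-code idea concrete, here is a minimal sketch of the kind of policy gate a pipeline could run before a deployment is allowed to proceed. The manifest layout, the two rules, and the PyYAML dependency are illustrative assumptions, not any particular vendor’s tooling.

# Minimal compliance-as-code sketch: a CI step that fails the pipeline when a
# deployment manifest violates simple policy rules. Rules and manifest layout
# are illustrative assumptions; requires PyYAML.
import sys
import yaml

POLICIES = [
    ("image must be pinned (no ':latest')",
     lambda c: not c.get("image", "").endswith(":latest")),
    ("resource limits must be set",
     lambda c: "limits" in c.get("resources", {})),
]

def check_manifest(path):
    """Return a list of human-readable policy violations for one manifest."""
    with open(path) as f:
        manifest = yaml.safe_load(f)
    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    violations = []
    for container in containers:
        for description, rule in POLICIES:
            if not rule(container):
                violations.append(f"{container.get('name', '?')}: {description}")
    return violations

if __name__ == "__main__":
    problems = [v for path in sys.argv[1:] for v in check_manifest(path)]
    for v in problems:
        print(f"POLICY VIOLATION: {v}")
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI job

In practice such checks would be versioned alongside the application code, so policy changes go through the same review process as everything else in the pipeline.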
Vision and data derived from videos have become integral to numerous industries, with machine vision playing a crucial role in automating business processes. For instance, automatic inventory management, often supported by robots, is transitioning from experimental to mainstream. Machine vision also enhances security and safety by replacing human monitoring with machines that operate around the clock, offering greater accuracy at a lower cost. On the consumer front, virtual try-ons and AI-assisted mirrors have become standard features in reputable retail outlets, in both physical stores and online platforms. ... Traditional boundaries of security, which once focused on standard data security, governance, and IT protocols, are now fluid and dynamic. The integration of AI, data analytics, and machine learning has created diverse contexts for output consumption, resulting in new business operations around model simulations and decision-making related to model pipelines. These operations include processes like model publishing, hyperparameter observability, and auditing model reasoning, all of which push the boundaries of AI responsibility.
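As a rough illustration of what hyperparameter observability and auditable model decisions can look like in practice, here is a minimal sketch that records each training run as an append-only audit log. The field names, file format, and example values are assumptions made for illustration, not a reference to any specific MLOps platform.

# Minimal sketch: record hyperparameters and modelling decisions as JSON lines
# so a model run can be audited later. All names/values are illustrative.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class RunRecord:
    model_name: str
    hyperparameters: dict
    decisions: list = field(default_factory=list)  # free-text rationale entries
    timestamp: float = field(default_factory=time.time)

    def log_decision(self, note):
        """Append a human-readable explanation of a modelling choice."""
        self.decisions.append({"time": time.time(), "note": note})

    def write(self, path="model_audit_log.jsonl"):
        """Append the full record as one JSON line for later auditing."""
        with open(path, "a") as f:
            f.write(json.dumps(asdict(self)) + "\n")

# Example usage with hypothetical values
record = RunRecord("churn-classifier-v3",
                   {"learning_rate": 0.01, "max_depth": 6, "n_estimators": 300})
record.log_decision("Lowered learning_rate after validation loss oscillated.")
record.write()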
None of the lawyers, though, discussed who is at fault if the code generated by an AI results in some catastrophic outcome. For example: The company delivering a product shares some responsibility for, say, choosing a library that has known deficiencies. If a product ships using a library that has known exploits and that product causes an incident that results in tangible harm, who owns that failure? The product maker, the library coder, or the company that chose the product? Usually, it's all three. ... Now add AI code into the mix. Clearly, most of the responsibility falls on the shoulders of the coder who chooses to use code generated by an AI. After all, it's common knowledge that the code may not work and needs to be thoroughly tested. In a comprehensive lawsuit, will claimants also go after the companies that produce the AIs and even the organizations from which content was taken to train those AIs (even if done without permission)? As every attorney has told me, there is very little case law thus far. We won't really know the answers until something goes wrong, parties wind up in court, and it's adjudicated thoroughly. We're in uncharted waters here.
Dependencies are the hidden traps of software architecture. When your system is littered with them — whether they’re external libraries, tightly coupled modules, or interdependent microservices — it creates a tangled web that’s hard to navigate. They make the system difficult to debug locally. Every change risks breaking something else. Deployments take more time, troubleshooting takes longer, and cascading failures are a real threat. The result? Your team spends more time toiling and less time innovating. ... Reducing dependencies doesn’t mean eliminating them entirely or splitting your system into nanoservices. Overcorrecting by creating tiny, hyper-granular services might seem like a solution, but it often leads to even greater complexity. In this scenario, you’ll find yourself managing dozens — or even hundreds — of moving parts, each requiring its own maintenance, monitoring, and communication overhead. Instead, aim for balance. Establish boundaries for your microservices that promote cohesion, avoiding unnecessary fragmentation. Strive for an architecture where services interact efficiently but aren’t overly reliant on each other, which increases the flexibility and resilience of your system.
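One way to picture that balance is a service that depends on a narrow, cohesive interface rather than on another service’s internals. The sketch below assumes a hypothetical order and payment example; the class and method names are invented for illustration.

# Minimal sketch of loosening a dependency: the order service only knows a
# narrow interface, so it can be debugged locally with a fake implementation.
from typing import Protocol

class PaymentGateway(Protocol):
    """The only thing the order service knows about payments."""
    def charge(self, order_id: str, amount_cents: int) -> bool: ...

class OrderService:
    def __init__(self, payments: PaymentGateway):
        self.payments = payments  # dependency injected, easy to swap or stub

    def place_order(self, order_id: str, amount_cents: int) -> str:
        ok = self.payments.charge(order_id, amount_cents)
        return "confirmed" if ok else "payment_failed"

class FakeGateway:
    """Local stand-in so orders can be tested without the real payment system."""
    def charge(self, order_id: str, amount_cents: int) -> bool:
        return True

if __name__ == "__main__":
    service = OrderService(FakeGateway())
    print(service.place_order("order-42", 1999))  # -> "confirmed"

The point is not the specific pattern but the boundary: each service exposes a small, cohesive contract, so changes behind that contract don’t ripple across the system.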