The AI Revolution in Coding: The Divide Between Winners and Losers

Introduction: The Promise and Peril of Generative AI for Engineers and Developers

In a digital age teeming with possibilities, the allure of Generative AI is hard to ignore. By its very nature, the technology asks us to imagine a world in which a coder becomes an architect, an engineer becomes a product designer, and a developer becomes a data scientist. Generative AI promises to be a great equalizer, turning anyone with a laptop and an internet connection into a multi-disciplinary expert.

While traditional AI models operate within the confines of predefined algorithms, churning out expected results, Generative AI is the maverick in the room. It's the algorithmic virtuoso that not only mimics human-like problem-solving but generates entirely new ideas, code, or even art. Think of it as the modern-day philosopher's stone, poised to turn the mundane into the extraordinary.

Yet, much like the mythical philosopher's stone, the transformative power of Generative AI comes with its own set of hazards—especially for the unprepared. Far from being a one-click solution to mastery, it can become a trap for novices, a maze of complexities that exacerbate the digital divide between experts and beginners. Before we rush to embrace this revolution, we need to scrutinize the landscape, understanding both its transformative potential and inherent limitations.

This article aims to delve deep into this multifaceted narrative, examining how Generative AI can be a boon and a burden, simplifying tasks while simultaneously creating a lopsided impact on the workforce. As we explore AI's integration into languages like Java, JavaScript, and Python, and its role in broader fields like data science and cloud infrastructure, we'll unravel the nuanced ways this technology is reshaping the engineering and developer ecosystem.


Modern Alchemy: The Promise and Pitfalls in Practice

Generative AI's allure is tantalizing, making us wonder: can the average person really build an application with something like ChatGPT? (Throughout this article, "ChatGPT" refers to GPT-4.)

The Short Answer: It's Complicated

Guides on developing apps have existed for years, but ChatGPT and similar platforms offer unprecedented ease. Yet, this ease comes with strings attached. As the complexity of your project rises, so do the risks of generating erroneous or misleading code.

Let's take Nick Babich's exploration as an example. While ChatGPT assisted in designing his test app, the AI fell short when it came to executing the full project. This underscores a critical realization: AI is a guide, but the journey is still human.

Industry Skepticism

Tech giants like Google and Samsung have expressed reservations about relying on AI for code generation. The concerns aren't merely about data security but also the quality and reliability of the AI-generated code. For example, Google advises caution when using AI platforms, not only external tools like ChatGPT but even their own, such as Bard.

The Developer's Dilemma

Online platforms like Stack Overflow further illustrate the issue. A temporary ban on ChatGPT responses came about due to concerns over the accuracy of AI-generated content. The message is clear: while these tools offer the promise of increased productivity, they also introduce risks—especially for developers without the expertise to spot and correct AI-generated errors.

A Work in Progress

The integration of Generative AI into code development promises a revolution in productivity and a democratization of the profession. The possibility for anyone and everyone to become a programmer! Yet, as early interactions with platforms like ChatGPT show, it also brings challenges in ensuring code accuracy and reliability. The real-world application of Generative AI, for now, remains a blend of enormous potential and sobering limitations.


The Banality of Alchemy: Decoding the Variables

Image Generated by a Custom Trained Model

Setting aside the case of someone completely new to coding, let's consider the "best implementation of AI": as a functional assistant to experienced developers. In this respect, it becomes clear that its efficiency is influenced by specific determinants. Drawing on my experience managing development projects that employed AI, I've identified three critical factors that impact the technology's utility.

  1. The Dance Between Human-Centric Design, Frameworks, and Debugging: The relationship between generative AI and a programming framework is multi-faceted, determined not only by the framework's human readability but also by its debugging capabilities. A language like Python, known for its readability, allows for more straightforward interaction with AI tools. However, it's not just about the ease of writing code; it's also about the ease of debugging it. Languages like Java offer robust debugging tools and extensive documentation, making it easier for AI tools to guide users through debugging steps effectively. This holistic approach to choosing a framework can significantly impact the AI's utility in real-world scenarios.
  2. Theoretical Foundations: It's not just about knowing how to code; it's about understanding the theory behind it. For example, a developer with a deep understanding of data structures and algorithms will likely get more out of AI-assisted coding tools. A solid theoretical background allows for better communication with AI, enhancing its utility.
  3. Tolerance to Variability: Some projects are more forgiving of errors than others. For instance, crafting a basic website might be less impacted by AI-generated errors than developing a healthcare application, where errors can be costly or dangerous. It's vital to evaluate how crucial specific choices are to a project's outcome when considering AI assistance.

By scrutinizing these key factors, we can gain a more nuanced understanding of generative AI in coding. This can help both novice and seasoned developers, as well as decision-makers, navigate the evolving landscape of AI-assisted development.


From Human Thought to Programming Frameworks: The Complex Dance of Language, Flexibility, and Adaptation

Image Generated by a Custom Trained Model


The translation of human-centric ideas to machine-readable code is a complex endeavor. This complexity isn't just a function of the developer's skill but is deeply influenced by the choice of programming languages and frameworks. Let’s break this down using some of the most widely-used languages: Java, JavaScript, and Python.


The Java Paradox: Rigidity vs. Reliability

Take Java for instance, a language celebrated for its platform independence and robustness. It’s characterized by its strict object-oriented structure and often requires multiple lines of code to articulate a single functionality. This verbosity, while providing explicit clarity for human developers, can be a maze for AI to navigate. Each line and each clause increases the potential for generative errors, making the overall task of producing efficient and accurate Java code a complex endeavor for AI.

For instance, a team recently turned to ChatGPT for a seemingly simple task: changing the background color of a navigation bar in their Android application. Despite the relevant Java code being readily available, the AI got lost in the language's intricacies. It made inappropriate suggestions, effectively creating more problems than solutions; what should have been a couple of minutes of work turned into a week's work.

However, Java's rigidity can be a virtue, too. Its mature ecosystem and detailed trace logs offer valuable guideposts for both human and AI developers. These traits can make AI-generated Java code more reliable, especially when it comes to debugging, suggesting that AI's effectiveness is multidimensional rather than a binary measure.

AI error at its most benign: ChatGPT-4 tends to forget imports. Here the missing useState import is easy to spot at a glance. (This can be debugged incredibly easily.)

The JavaScript Ecosystem: A Double-Edged Sword

React Native takes React's challenges and amplifies them. In its most benign form, the AI might forget to include essential imports. For instance, I have seen a scenario where a developer consults ChatGPT for assistance in setting up a component with React Native. The AI generates a seemingly complete block of code, only for the seasoned eye to spot that useState is conspicuously missing from the imports. This is a simple mistake, and the trace console can easily direct the new developer to what is wrong, but it is only the most benign of cases.
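
To make this failure mode concrete, here is a minimal, hypothetical reconstruction of the kind of snippet ChatGPT tends to produce. The component name and contents are invented for illustration; only the missing-import pattern mirrors what I observed.

```javascript
// Hypothetical reconstruction of an AI-generated React Native component.
// The component name and contents are invented; only the missing-import
// pattern mirrors the behavior described above.
import React from 'react'; // useState should be imported here, but is not
import { View, Text, Button } from 'react-native';

export default function Counter() {
  // Throws "ReferenceError: useState is not defined" as soon as the component renders.
  const [count, setCount] = useState(0);

  return (
    <View>
      <Text>Pressed {count} times</Text>
      <Button title="Press me" onPress={() => setCount(count + 1)} />
    </View>
  );
}

// The one-line fix:
// import React, { useState } from 'react';
```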

At its worst, when dealing with complex logic, the AI's shortcomings become glaringly obvious. For example, a developer asked for a solution for a guided tour of the application that should only appear under specific conditions. ChatGPT offered a code snippet in which the useState for showTourPrompt is initialized to false, but the logic never accounts for this; as a result, the prompt is never triggered. Furthermore, the showTooltip state lacked the granularity the application needed, so the end state would never “stay” triggered. Worst of all, and quite predictably, all of this happened without throwing any errors. Here, the AI is not just failing to solve the problem but also introducing new ones, which speaks to the difficulties of using generative AI for intricate tasks.
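
Below is a minimal sketch of the dead-branch pattern described above. It is a hypothetical reconstruction, not the project's actual code: the component, the TourPrompt stand-in, and the trigger condition are all invented for illustration.

```javascript
// Hypothetical reconstruction of the flawed logic described above.
import React, { useState } from 'react';
import { View, Text } from 'react-native';

// Invented stand-in for the guided-tour prompt.
function TourPrompt() {
  return <Text>Welcome! Tap here to start the guided tour.</Text>;
}

export default function HomeScreen() {
  // The flag starts as false...
  const [showTourPrompt, setShowTourPrompt] = useState(false);

  // ...but nothing in the generated code ever calls setShowTourPrompt(true),
  // so this branch is effectively dead. No error is thrown; the prompt
  // simply never appears.
  return (
    <View>
      {showTourPrompt && <TourPrompt />}
    </View>
  );
}

// A working version would flip the flag when the trigger condition is met,
// for example (isFirstTimeUser is a hypothetical condition):
//
//   useEffect(() => {
//     if (isFirstTimeUser) {
//       setShowTourPrompt(true);
//     }
//   }, [isFirstTimeUser]);
```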

Moreover, the landscape of React Native is ever-changing. Packages are frequently updated, rendering previous implementations obsolete and making constant learning a necessity. This is not just a problem for AI; even seasoned developers can find themselves struggling to catch up. Take the Google Sign-In package, for example. I've observed developers grappling for weeks with a sudden update that left them scrambling to adjust their codebases. AI, reliant as it is on training data that may not be up-to-date, can fall behind even more quickly, posing challenges for its effective utilization.

This brings us to an often-overlooked dimension of AI's limitations—its need for constant training and adaptation. React Native serves as an excellent example, with packages and APIs that are updated at a pace that even human developers find challenging to keep up with. For a machine learning model, the lag in training can result in outdated solutions, misleading guidance, and ultimately, lost time and resources.

In contrast, let's circle back to Java. While its verbosity and strict structure present challenges for AI generation, these features also serve as assets in certain contexts. Java's meticulous trace logs and mature ecosystem can provide a more controlled environment for AI. These trace logs serve as detailed guideposts for debugging, offering precise information on where something went wrong, thus making it easier to identify and rectify generative errors. Therefore, even as we critique Java's complexities, it's worth acknowledging that they also bring a level of rigor and predictability that can make generative AI tools like ChatGPT more reliable within its scope.

AI at its average: here GPT-4 failed to account for the state of a few constants. showTourPrompt is set to false and is never changed to true... a simple logic mistake that could take hours to track down.

Python: The Forgiving Ally

Python, on the other hand, seems almost tailor-made for AI assistance. Its user-friendly syntax allows for easier translation of human intent into code. That is not to say Python is free of AI challenges; rather, the limitations tend to lie in human articulation. The language's simplicity often compensates for any vagueness in task description, making it more forgiving for a wider user base.

Python also offers detailed debugging messages that help both human developers and their AI counterparts. These features make Python a relatively safer playground for AI, so long as the user can articulate their needs reasonably well.

Adapting to the Evolving AI Landscape

Across Java, JavaScript, and Python, one commonality emerges: the ever-evolving landscape of AI. Continuous updates and training are crucial, whether you're navigating Java's strict rulebook, JavaScript's fast-paced ecosystem, or Python's forgiving nature. But remember, this is not the end of the road; it's merely the starting point for broader applications of AI.


A New Frontier: Beyond Coding to Infrastructure and Data Science

Image Generated by a Custom Trained Model

We've delved deep into the realm of coding, but what about the broader engineering field?

AI and Data Science: The Art of Choices

The smooth dance between Python and AI doesn't stop at coding. It waltzes right into data science. AI can handle data cleaning and even implement most models. Here the focus is on implementing as opposed to directing, and the role of human expertise is critical when you're dealing with a craft that's as much art as it is science. A novice may find themselves stuck, not just in interpreting results but in understanding what questions to ask in the first place. A master, by contrast, can offload the tasks that have become unnecessary chores and focus on where humans add real value: the design and the strategic choices.

The Cloud’s Hidden Complexities

Speaking of broader engineering, let's take a quick look at cloud services like AWS and GCP. AI can guide you through setting up VMs, container orchestration, and more. But scale is the curveball here. What's efficient for a pet project may become a thorny issue as you scale. AI advice often lacks the big-picture awareness, creating pitfalls in larger, more complex environments.

AI can't know the full context of your project. It may suggest a technically correct solution that doesn't fit the actual needs of your project, leading to more trouble down the road. This is particularly concerning because there is an inordinate number of positions that rely solely on an engineer's ability to "do" something in the cloud. That ability has effectively been democratized by AI: it will not only explain how to do something but also walk you through the steps to do it.

What AI Doesn't Know Can Hurt You

In both of these cases the onus was on implementation, and AI is surprisingly adept at it. The landscape is changing drastically; generative AI truly is a revolution in how we do things. It's not as radical as most would believe, but it does present dramatic problems that need to be addressed not only by developers today, but also by companies as they adapt to the new reality.


Final Thoughts: AI's Double-Edged Sword—Democratization and Stratification

Image Generated by a Custom Trained Model

As our exploration has shown, AI's integration into the worlds of coding, data science, and cloud infrastructure is a complex affair. We've navigated through Java's rigidity, JavaScript's changing ecosystems, and Python's accommodating nature. What emerges is a paradoxical landscape.

AI is often lauded for making previously specialized tasks more approachable. True, you no longer need a deep dive into cloud architecture or Python libraries to accomplish some tasks. 

But here's the wrinkle: this simplification is not uniformly beneficial. For seasoned professionals, AI serves as an amplifier of existing skills, turning efficiency into mastery. But for those still learning the ropes, AI might create an illusion of competence that has its pitfalls. The entry-level jobs and tasks it now absorbs were the ones that taught you how to troubleshoot, how to think critically, how to adapt: skills that AI can't teach you.

This creates a lopsided evolution within the tech industry. We're not just seeing a democratization of skills; we're witnessing a kind of artificial inflation. Entry-level positions, once the training grounds for broader expertise, risk becoming less informative, less enriching, and perhaps even less available.

So, what’s the takeaway? For those who view their roles as a series of tasks to check off, you definitely should be worried. You're in a race with automation, and you're not winning.

But for those who engage deeply with technology, the advent of AI should be seen not as a limitation but as an open question: "What can I do now that was not possible before?" Answering that question—something AI still can’t do for you—is the key to standing out in an increasingly complex landscape. The nuances, the decisions, the strategy—these are your domain, not AI's.
