The Future of Generative Artificial Intelligence for Enterprise is Inevitable but Risky
Before we get into the meat of this article, let’s acknowledge the elephant in the room: bringing generative AI into the enterprise has its risks. Inaccurate or misleading information could inadvertently lead teams astray. A lack of personalization and human touch could erode the customer experience, and an overreliance on automation could degrade the employee experience.
Still, the optimism in the market around the future of generative AI is palpable. Stock valuations for companies using or enabling artificial intelligence have risen rapidly year to date, and sentiment toward AI in the workplace has also improved, according to a survey by Boston Consulting Group.
Looking at this study and the market’s response to AI companies, it’s clear that confidence, optimism, and curiosity are high, while indifference and concern have dropped significantly over the last five years. The stock price of Nvidia, the leading maker of AI chips for data centers, self-driving cars, robotics, and more, increased 209% in 2023. META, Facebook’s parent company, has made its Llama-v2 LLM available on an open-source basis to compete with OpenAI, offering plenty of optimism for creators, small businesses, and advertisers and causing the stock market to take notice. Generative AI is also now available in Microsoft’s Azure cloud-computing service, which has been used to develop Copilot, Microsoft’s generative AI tool.
Yes, there is a growing wave of interest in AI among industry giants. Still, this ever-increasing interest does not prove that generative AI is taking over the enterprise world. Much evidence has yet to emerge from the market, particularly about just how ubiquitous generative AI will become in the enterprise.
Our skepticism stems from the fact that we haven’t yet seen much of the backlash this emerging technology could provoke. Based on where generative AI sits on the Gartner Hype Cycle for Artificial Intelligence, we’re confident we’re not alone in this sentiment.
Take a look at where generative AI specifically sits in this hype cycle. In 2022, it was nearing the crest of the Peak of Inflated Expectations. At this point, many look at an emerging technology through rose-colored glasses, touting the possibilities while remaining unsure of the risks. Generative AI will likely soon fall into the Trough of Disillusionment before leveling out where many want it to be in terms of how the technology gets used: the Plateau of Productivity.
From an employee experience perspective, this is good news. Rather than feeling rushed to the exits of business as we know it, or pressured to lay off workers and “hire” artificial intelligence to do their jobs, enterprise businesses are in a prime position to take a more systematic and strategic approach while remaining relevant in today’s market. Rather than replacing the workforce with a handful of AI shepherds guiding the bots, you can start planning now for how to balance this shift.
There’s another distinct benefit to not rushing for the AI exits. Future legislation has the potential to uproot many enterprise plans for leveraging the technology, and it could put a full stop to quite a bit of generative AI development.
Future Legislation Could Turn the Enterprise Generative AI Hype on Its Head
Earlier in this post, we presented an idea about how our usage of generative AI could have drastic effects on society: the idea that AI’s solution to global warming could be to rid the earth of the humans who caused this mess, with us humans unable to stop it. That example is admittedly extreme, and it has likely fueled the generative AI hype we’re seeing in the headlines today. The reality is that we are more than likely still several years away from any point of singularity at which generative AI could pose a serious enough risk to attempt to rid the earth of the human species.
However, other life-and-death dangers of AI have caught the attention of lawmakers and developers. Those dangers led to the writing of the now-famous open letter calling for a pause on giant AI experiments. The authors of the letter stated:
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
What does this mean for the enterprise? A lot. Organizations that move too quickly to adopt generative AI, and come to rely on it too heavily, could be whiplashed back to past processes if legislation requires businesses to stop using it altogether.
Legislation around generative AI isn’t a bad thing for the enterprise. It could offer some tremendous opportunities while protecting individuals and addressing societal concerns. As lawmakers and developers navigate the murky waters of managing this technology, the inevitable regulations ahead could shift where and how AI is used at the enterprise level.
We decided to let ChatGPT have its say in the matter. Here are five core areas that generative AI believes could directly impact how corporations infuse this technology into their workflows, processes, and products — data privacy and security, ethics, intellectual property, liability, and global regulations.
Data privacy and security regulations have the potential to require stricter protocols on how customers’ data is used and fed into generative AI algorithms. From a consumer perspective, regulations are valuable because they safeguard privacy and prevent personal data from being repurposed for business uses the consumer never agreed to. These compliance measures can limit what’s accessible to the algorithms, directly impacting the outcomes enterprises can expect from shifting to this new technology.
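To make that concrete, here’s a minimal sketch of a data-minimization step that could sit between customer records and a generative model. Everything in it is an illustrative assumption on our part: the regex patterns, the redact helper, and the sample ticket are invented for this example, and a real compliance pipeline would rely on a vetted PII-detection library and counsel-approved data-handling rules.

```python
import re

# Illustrative patterns only -- not a substitute for a real PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Only the redacted text would ever be sent to the generative model.
ticket = "Customer jane.doe@example.com (555-867-5309) reports a billing error."
print(redact(ticket))
# -> Customer [EMAIL REDACTED] ([PHONE REDACTED]) reports a billing error.
```

The design choice is the point: whatever the eventual regulations require, putting a filter like this in front of the model means compliance changes happen in one place rather than everywhere the model is called.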
Ethics are something we touched on earlier, but the ethical implications are important enough to address again. AI systems have clearly shown their ability to perpetuate bias and encourage discrimination. Unintentional bias and discrimination can have far-reaching ripple effects and cause real harm. Ensuring AI acts ethically requires oversight beyond the enterprise level, and that oversight could change how effective AI ultimately is.
Intellectual property and innovation are also at stake. With AI generating intellectual property (IP), questions arise about who owns work that AI heavily contributed to creating. This concern is significant for collaborative work, research and development, and innovation efforts across various industries.
Liability and accountability for AI usage come into question when individuals or organizations come under fire for something AI-generated. Who is responsible for the output and how it was used? Current regulations around AI-assisted decision-making are fuzzy. Legislators must create more explicit guidelines for how AI may be used and how individuals or organizations are held responsible for using the technology.
Global regulations are required to maintain worldwide ethical usage of AI, yet legislation that crosses borders is challenging at best. Every country may adopt slightly different rules, which makes compliance far more difficult and complex. These complexities can quickly slow the scalability of AI across borders and the worldwide adoption of the technology.
Policymakers undoubtedly have their eye on the need for these regulations. The question is not if new legislation will be passed; it’s when. In the meantime, enterprise decision-making cannot rest solely on how AI is used and regulated today. Decision-makers within organizations must take a more predictive and methodical approach to deploying this technology so that the changes they make can be long-lasting, fruitful, and effective. In turn:
Enterprises should be cautious about rushing to the exits in search of ways to get employees off cost sheets and replace them with AI shepherds. In doing so, organizations will lose touch with customers and employees, eroding both experiences and bringing innovation to a full stop.
When you know your product, why you created it, and why your customers come to you for it, you’re better equipped to make robust strategic decisions. Analyzing your customers’ emotions and logical thought patterns and aligning them with your organization’s why, business model, and products will keep you competitive. Is there a world where you can use AI to aid those efforts? Absolutely. And that’s where StoryVesting, our problem-solving framework for the enterprise, comes in: it aligns this new technology with a strategic approach you can use today, regardless of any legislation coming down the pipeline.
A Problem-Solving Framework for Navigating Generative Artificial Intelligence in Enterprise
Navigating the world of generative artificial intelligence for the enterprise is no easy feat. Rather than diving in head first and unleashing the technology across your organization, you’re better served by a systematic, incremental approach than by a free-for-all that has to be reeled in when things go askew.
By now, you’ve heard that generative artificial intelligence can save you time. And by now, you’re likely nodding your head in agreement that before you roll this technology out organization-wide, you must first have a barometer by which you make core decisions around AI. We argue that the barometer for solving the problem of how to introduce generative AI into your workplace is this:
Will it be better for humanity?
Keeping humanity at the forefront of your decision-making will help guide your teams as you look at how to effectively infuse generative AI into your processes, use the technology to replace antiquated platforms, investigate how your people are already using it, and design your future products. That’s no easy feat. Humans are complex creatures. Look at this image as evidence of human bias when solving complex problems.
Researchers recently gave this image to study participants and asked them to make it perfectly symmetrical by changing the colors of the squares from blue to white or vice versa. Rather than subtracting the four blue squares by changing them to white, approximately half of the participants changed squares in the other three quadrants from white to blue. The findings demonstrate that people tend to add elements to solve problems rather than subtract them.
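A quick sketch shows why the subtractive fix is the cheaper one. The grid size and cell coordinates below are our own illustrative assumptions, since we aren’t reproducing the study’s exact stimulus; the point is only the arithmetic of four changes versus twelve.

```python
# The 10x10 grid and these coordinates are invented for illustration;
# they are not the actual stimulus from the study.
SIZE = 10
blue = {(1, 1), (1, 2), (2, 1), (2, 2)}  # extra blue squares in one quadrant

def mirrors(cell):
    """Return a cell's four reflections across the grid's center axes."""
    r, c = cell
    return {(r, c), (r, SIZE - 1 - c),
            (SIZE - 1 - r, c), (SIZE - 1 - r, SIZE - 1 - c)}

# Subtractive fix: turn the four asymmetric blue cells white.
subtractive_moves = len(blue)

# Additive fix: paint the mirror images blue in the other three quadrants.
additive_moves = len(set().union(*(mirrors(c) for c in blue)) - blue)

print(subtractive_moves, additive_moves)  # 4 12
```

Both edits yield a perfectly symmetrical grid, but the additive route costs three times as many changes, and roughly half of the participants still chose it.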
At RocketSource, we’re experts at simplifying the complex — even something as complex as generative AI. One way we do that consistently is by leveraging our proprietary problem-solving StoryVesting framework.
If you haven’t heard of this framework before, we highly encourage you to read up on the backstory of how it came to life. It’s a fascinating account of how the framework is steeped in research and behavioral economics, and the human behaviors that shaped it are critical to understanding the next steps to take when considering generative AI for the enterprise.
A People-First Approach to Generative Artificial Intelligence for Enterprise
One of the many impressive things about the StoryVesting framework is that it’s built on something as simple and primal as a drop of water falling into a pond.
If you’ve ever been to a lake and thumbed through the pebbles to find the flattest one, you’ve likely seen what we’re talking about. When a droplet hits the water, ripples emerge outward from the impact center.
With that visual in mind, look at how this framework is graphically showcased.
Two ripples radiate out from the center, or point of impact. For the organization, the catalyst for everything that ripples outward is the why. It’s why the founders first started the organization. It’s why the company continues to exist. The founder’s story must be referred back to time and again. Similarly, at the core of the customer’s experience is the customer’s personal why. This why isn’t expressed verbally so much as felt emotionally as they move through their day.
Immediately after this why come the logical outcomes of those emotional responses, directed at either the market or the self. On the consumer side, these are the logical triggers that define how people choose between brands. On the business side, the next logical step to ripple out from the why is the business model: how the company makes money.
One thing we’ve found when working with teams is how overwhelming it can feel for many employees to define the business model. That overwhelm shows up at all levels within the organization. Panic seems to strike for a split second when we ask the employees in the conference room to whiteboard the business model for the group. That’s because, as the study above shows, the human brain tends to add complexity. An added layer of complexity around something as core to the operation as earning revenue can muddy the waters, and those muddied waters become problematic when making critical decisions, such as how to leverage generative AI in the enterprise.

Having a framework like StoryVesting allows for a more systematic and strategic approach to reconciling those complexities while honoring the emotional and logical responses that go into running a business or making a purchase decision. Here’s what that framework looks like when it comes together.
As you can see, two circles here represent the two core experiences that will be impacted when dropping in something as transformational as generative AI — the employee’s experience with the brand and the customer’s experience.
The brand experience is catalyzed by the core why, as discussed above. From there, the business model is formed. Bringing that business model to life are the 3 Ps: people, platforms, and processes. With those three critical cogs moving in sync, the organization’s products and services take shape before moving through various channels to build brand awareness. The culmination of all these elements is the brand experience, which showcases how well everything works in partnership.
The customer’s experience represents the internal decision-making process every person goes through when choosing which business to buy from or work with. As mentioned earlier, an emotional response is the catalyst for this ripple effect. This emotion can range from nostalgia and longing to hope and enthusiasm, and everything in between. Once that emotional trigger is pulled, the consumer begins logically weighing a variety of factors, such as:
- Is this product or service relevant to me?
- Is the time and effort it’ll take to fulfill this emotional need worth it?
- Will my expectations meet reality after I go through with this purchase decision?
- Do I trust this brand’s promise?
After these and a slew of other considerations, the customer decides to buy. At that point, the emotional and logical factors come into focus and are reconciled into one overarching experience. It’s that reconciliation that many organizations are fundamentally fearful of trying to solve with AI, and rightfully so.
To read more, check out our more robust article on generative artificial intelligence for enterprise on the RocketSource blog.