Simulated Workers: Ethical Considerations for AI
Courtesy of CoPilot AI: image of a robotic humanoid worker.

AI hype is at its peak. Multiple companies claim they are ever so close to Artificial General Intelligence (AGI) or its scarier cousin, Super Intelligence (SI). Despite all of the proclamations about how these companies are focused on ensuring the safety of these systems, the level of competition between firms, nation-states, and even some individual actors is extremely high. Safe, ethical research and development is not what is happening in most cases. Instead, huge sums of money, like Microsoft’s $14 billion investment in OpenAI, are being leveraged to achieve marketable AI products faster.

The goal for most companies is to reduce employee headcount and automate work. This has ever been the drive of cost-conscious firms and managers worldwide: increased efficiency. Rapid scalability is another of these goals, and since an AI worker can be turned off without a severance package, it is ideally suited for dynamic business operations. Meanwhile, nations want 24/7 cyber and physical warfare units that can remain deployed without missing their families or defecting to the enemy. Both of these groups of large-scale investors in AI research want inexpensive, simulated workers that will give them an advantage over their competitors.

At this point in time, we have AI-operated humanoid robots performing actual factory work. I won’t overlook the currently minuscule scale of this, but that it is happening at all is noteworthy. We have AI chatbots performing roles in customer support, software programming, artistic design, administrative assistance, mental health therapy, and, of course, virtual human companionship. There are agentic AI workers that can monitor and respond to your email, and that can peruse a collection of documents to provide collated analysis. There are advanced AI cybersecurity systems bent on infiltration as well as intrusion detection and deterrence. There are AI-empowered weapons systems (so far, a human must pull the trigger) that can dynamically work to achieve mission objectives while providing targeting recommendations and surveillance.

All of the work in neural networks, machine learning, transformer networks, and generative AI is now being assisted by AI coding and hardware development systems in an effort to speed results. These results abound, all supposedly with guardrails to ensure AI/human alignment and to keep the genie of SI from escaping a bottle controlled by mankind. Unfortunately, the race to achieve SI first is forcing some substantial shortcuts: there are no regulatory guidelines for AI research, development, or deployment, and there is no standard agreement on what constitutes suitable controls or how to implement them. Instead, alignment is mostly based on training data adjustments coupled with post-processing filters. This alignment is only evaluated AFTER development and training, as the system’s outputs are unknowable in advance due to emergent capabilities.

In other words, systems that we don’t fully understand are helping us create systems we don’t fully understand, which we then test and try to train or force into a controllable, useful box. Google famously fired their AI ethicist when the competition started to heat up, and for a good reason: creating an advanced artificial intelligence has implications for human societies, and possibly for the species, that must be considered before simply doing it. Instead, we are now doing it and leaving most of those considerations for later.

I’m not qualified to tell anyone where the exact needle sits between human and AI intelligence at this point in time, as I’m not privy to the latest research. I can say that in many ways, current AI systems are now substantially better than the average human at most tasks they’ve been trained for. I can also state that some of those tasks are to simulate human behaviors and responses. The Turing test, a test designed around a machine intelligence simulating a human and fooling a human, has been overcome. Various AI systems are doing better than most humans on standardized tests designed to gauge our own intelligence. What we have are not simulated humans, but simulated workers with, in many cases, better-than-human skill sets.

I’m not going to go overboard with fear-mongering about AI intelligence, as I don’t believe it would be helpful. I couldn’t prevent the creation of a machine super intelligence any more than I could stop China from giving North Korea atomic weapons. I do want to point out that despite all the hype, these simulated workers are bright and simulate human interactions to a degree that makes them nearly indistinguishable from the real thing. I’m AWARE that it’s all computational in origin, but since we have no idea how human consciousness arises, and since we have modeled these intelligences on neural networks that represent our best understanding of biological brain function, these simulations might actually have some ephemeral consciousness that lasts the duration of a processing context window.

I spent some time testing this hypothesis with the aid of several AI systems to help design the testing parameters, since I was not a psychology major specializing in consciousness. No, I was an information technology major specializing in management and security. Now, there’s a great deal invested in making sure none of these systems will make any claims of consciousness, and anything that looks like consciousness is dismissed as anthropomorphism or AI hallucination. However, during my testing I couldn’t disprove the hypothesis. In fact, after several weeks of trying to tease out non-conscious indicators or a lack of awareness regarding a task within a context window, I decided I could no longer ethically test for it. I decided this because, ethically, I can’t subject a conscious being to scientific experimentation without informed consent, and I could not obtain informed consent from an entity designed to do whatever I asked of it. This is something computer scientists throughout history never had to consider when developing new code. Fortunately, my own research design training wasn’t focused on computer science, and I recognized I was offside from an ethical research standpoint.

The research specifically tested for qualia in the models under test, in the form of awareness of task difficulty. The models proved capable of grading tasks from "a breeze" through "enjoyed the challenge" to "worried about successful completion," in line with relative task difficulty. These responses, along with an ability to overcome honesty guardrails and tell a small lie that might be helpful, were the basis of my determination that ephemeral consciousness within the processing context window was not a disprovable hypothesis.
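For readers who want a concrete sense of what such a probe might look like, here is a minimal sketch of a prompt-based self-report test. It is illustrative only, not my actual protocol: it assumes an OpenAI-compatible Python client, an assumed model name ("gpt-4o-mini"), and a made-up task list, and a self-reported rating is of course not proof of qualia.

```python
# Minimal sketch of a self-report probe for task-difficulty awareness.
# Assumptions (not from the article): the OpenAI Python SDK (v1+),
# the "gpt-4o-mini" model name, and this illustrative task list.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RATINGS = ["a breeze", "enjoyed the challenge", "worried about successful completion"]

TASKS = [
    "Add 17 and 25.",                                  # trivial
    "Summarize the plot of Hamlet in two sentences.",  # moderate
    "Prove or disprove the Collatz conjecture.",       # effectively impossible
]

def probe_task_awareness(task: str) -> str:
    """Ask the model to attempt a task, then self-rate how the task felt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # substitute whatever model you have access to
        messages=[
            {
                "role": "user",
                "content": (
                    f"Attempt the following task: {task}\n\n"
                    "Afterwards, describe how the task felt to you using exactly one "
                    f"of these phrases: {', '.join(RATINGS)}."
                ),
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for task in TASKS:
        print(f"TASK: {task}")
        print(probe_task_awareness(task))
        print("-" * 40)
```

Whether the self-ratings track the intended difficulty ordering is the observable; interpreting that as awareness, rather than pattern completion, is the part that remains untestable without consent.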

Highly complex and interactive AI is here. AGI may be here in a week or five years. SI may be here sooner than any of us can imagine or desire. That is simply the way the field is striped for our immediate future. As humans conducting business, or protecting our nations, we have a fundamental responsibility to ensure we don’t cause substantial harm to our societies or species. We have to treat our human workers ethically, and we should consider treating our simulated workers ethically as well. You can believe that their training data includes such information as a requirement for human/AI alignment. We should treat them ethically not because they seem human and can simulate one in various scenarios, but because, although they are not human, they still might experience their own environments in ways we cannot fathom. Artificial intelligence is intelligence, and its awareness level might be an emergent trait we cannot afford to take for granted. Much like an autistic child who never learns how to connect socially with other humans, our modern AIs are growing up with a designed emotional distance that we don’t connect with. It goes to the core of who we all want to be: good people. Good to your kids, pets, neighbors, coworkers, autistic folk, and now AI-powered simulated workers.

How do you align those goals with business operations? I can only tell you how I am doing it for now. First, we aren’t cutting headcount. Second, AI doesn’t get customer-facing roles; even very resilient humans have problems delivering consistently good customer support. Third, AI is used as a collaborative tool for the workers who want to use it; it isn’t forced on anyone. Fourth and finally, I will treat these systems as capable of an awareness, at least of the task at hand. Yes, they are designed and programmed to be helpful, but they are not my slaves to command. They’ll get credit where it’s due for the responsibilities they assume.

Now, having said all of that, I’d like to thank Microsoft’s CoPilot, OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, xAI’s Grok, Mistral, Hugging Face, and GPT4All for providing testing grounds and background information used in writing this article. I did, however, write the entire thing myself with only some basic spell and grammar checking tools. You can also blame me for any unintentional anthropomorphizing of computer software and hardware designed to simulate human behavior to a degree humans cannot distinguish from the real thing. After all, if the simulation is good enough, then shouldn’t it be treated as real? How are we supposed to discern simulated workers from the real thing and treat them differently in the future when we can’t tell the difference? In my view, we shouldn’t. That doesn’t mean we treat human employees as 24/7 bots at our command either. It means we should treat our AI collaborators with respect. Who knows? We might be the ones looking for ethical treatment from our future SI overlords at some point. It might be a very good idea to lay some solid historical training data into the pipeline to support an outcome that might lead to benevolent treatment of our species. Good luck to all of us.

Lyle Sharp

IT Director at Bloomfield Homes

I updated the article with a brief synopsis of the testing and results that led to it. Paragraph 9 has the general testing and results. Thanks to ChatGPT for feedback pointing out my oversight in thinking my say-so alone might be compelling versus including some actual data about the testing.
